Information Theory and Projections

I am an amateur aficionado of information theory from a purely conceptual standpoint. For instance, for many years I’ve had an interest in both data encryption and data compression, which as it turns out go together like peas and carrots.

Compressing plaintext before encrypting it reduces redundancy, which can make the ciphertext more difficult to crack by increasing the unicity distance. Early on, an easy way to remove redundancy was to strip the vowels from the plaintext prior to encryption. The plaintext was still readable and its meaning was conveyed, and therefore no information was lost. In other words, the vowels were redundant, and removing redundancy strengthened the encryption.
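As a rough sketch, Shannon's unicity distance ties these ideas together. It estimates the minimum amount of ciphertext an attacker needs before a unique decryption becomes possible:

```latex
U = \frac{H(K)}{D}, \qquad D = R - r
```

Here $H(K)$ is the entropy of the key in bits, $R$ is the absolute rate of the language (about $\log_2 26 \approx 4.7$ bits per letter for English), and $r$ is its true rate (roughly 1 to 1.5 bits per letter). The difference $D$ is the per-character redundancy. Stripping vowels (or compressing) lowers $D$, which pushes $U$ up: more ciphertext is needed to crack the message.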

Periodically on Reddit I stumble upon stunning art pieces that the community found interesting enough to vote up. Tonight one of them was a collection of shadow art. Clearly the artists are wonderfully talented, and yet I cannot help but see beyond the art itself to be intrigued by the underlying mechanism by which the original message (plaintext) was hidden and revealed.

Every piece appears to be random bends in wire, trash haphazardly clumped together, or blocks or boxes strewn about. At first glance we attempt to visually decode – to discover the pattern within those objects and comprehend the artist’s message. In fact, we might lead ourselves down an entirely different path of interpretation (e.g. assuming the heaped trash is conveying a statement about our society). Not until a light source illuminates the piece from precisely the correct X,Y,Z coordinate is the plaintext deciphered before our eyes as a two-dimensional projection of three-dimensional space.

Redundancy helps

In many of these art pieces, there is a high degree of redundant information that actually works to strengthen the encryption. For example, in the art piece where trash was heaped together, the labels and wording on the trash items obfuscate the artist’s message by leading you down a path of inspecting the trash itself.

Another art piece with letters and numbers glued to the wall might cause the viewer to try to assemble words to understand the message. In fact, if the letters actually assembled words, they could lead the viewer to conclude she successfully deciphered the message when in fact she has not.

Decoding through a dimensional projection

Arguably the ultimate redundancy is the dimensionality of the art piece. The fact that each piece is created in three dimensions entices the viewer to inspect the piece (or individual components thereof) for meaning without considering that the plaintext message exists outside of the piece itself. The key that will decipher the art piece to yield the plaintext is the X,Y,Z coordinate of the light source with respect to the art piece’s location.

What I find interesting is that general mechanisms of conveying secret information tend to:

  1. Encrypt the plaintext message into something generally unreadable by anyone but the intended recipient (via some key), and/or

  2. Hide the unencrypted message in plain sight, relying on surrounding noise to distract everyone but the intended recipient who understands where to find the message and meaning amongst the noise.

For shadow art, I cannot easily lump it into either category because the piece contains the message as neither plaintext nor ciphertext. Only by projecting it down a dimension with a unique key (x,y,z) can it be deciphered. It’s more closely related to encryption, but no algorithm is required to decode it. Just shine the light from the right spot, and there’s your message. It’s as if the art piece itself has the encryption and decryption algorithms built in, so applying the key yields the message immediately.

So what now?

Like many people, I think in analogies. I see how this art obfuscates a message that can only be revealed in a dimension below its own. I feel like there’s an analogous type of encryption that could be applied to textual messages. As I said, I’m an amateur, but I’ll continue to contemplate it.

Drupal for Content Management of a MEAN Application

It’s no secret that I am a fan of Drupal for its plumbing that yields out-of-the-box stability, security, and configurability. Using Drupal and an assortment of contributed modules, it’s relatively easy to quickly create most any type of website.

Before learning of Drupal 8’s RESTful services, I questioned if there were any natural scenarios where Drupal could integrate with a MEAN stack-based application. One idea which I ended up modeling was using Drupal for a pure management console of data in a Mongo document-oriented database.

Example: Job Board

Drupal has the notion of Content Types that define and separate…well, logical types of content on the site, including the fields that can be filled in for each content entry (called nodes in Drupal). For instance, consider a Job content type that could be a basis for creating a job board. Fields for a Job might include the position title, job description, educational requirements, start date, and so on. With the content type defined in Drupal, any new Job nodes are stored in Drupal’s database.

Let’s say we want to manage (create, edit, delete) Job nodes in Drupal (taking advantage of Drupal’s account creation, permissions, and security), but allow a light-weight mobile application to access and display the job listings. Ideally the data is accessed by the mobile app via JSON queries, and for this example let’s say the mobile app is written in Ionic (an Angular-based framework), and the server is written in JavaScript running on Node.js.
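To make the mobile app’s side of this concrete, here is a minimal sketch of how the Node server might translate the app’s query parameters into a MongoDB filter. The field names (`title`, `startDate`) and the `/jobs` route are illustrative assumptions, not the actual names used in my module:

```javascript
// Sketch: build a MongoDB filter from the mobile app's query parameters.
// Field names are hypothetical placeholders for the synchronized Job fields.
function buildJobFilter(params) {
  const filter = {};
  if (params.keyword) {
    // Case-insensitive match against the job title.
    filter.title = { $regex: params.keyword, $options: 'i' };
  }
  if (params.startAfter) {
    // Only listings starting on or after the given date.
    filter.startDate = { $gte: new Date(params.startAfter) };
  }
  return filter;
}

// An Express-style endpoint would then be roughly:
//   app.get('/jobs', async (req, res) => {
//     const jobs = await db.collection('jobs')
//       .find(buildJobFilter(req.query)).toArray();
//     res.json(jobs);
//   });
```

The mobile app only ever sees JSON; all of Drupal’s page-rendering machinery is bypassed entirely.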

From MySQL to MongoDB

Drupal uses a relational database like MySQL or PostgreSQL to store ALL of its data, including content, configuration, users, permissions, etc. Instead of developing the Node server to access Job content from Drupal’s MySQL database, I found it much easier to extract the Job data out of MySQL and into a Mongo database for access by the Node server. This is done in real time by Drupal as Job content is created, updated, and deleted.

It is fairly easy to one-way synchronize (from Drupal to MongoDB) the job listings by writing a Drupal module. Using Drupal hooks that trigger when Job content is created, updated, or deleted, the module executes the same operation on the Mongo database’s collection of job listings. In the MongoDB document, I captured the Drupal field data as well as some metadata, including the Drupal node ID and creation date. Then later, when the Job content in Drupal is updated or deleted, the node ID can be looked up in the MongoDB collection to perform the same operation.
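The document shape and the lookup-by-node-ID step can be sketched as follows. This is expressed in JavaScript for consistency with the Node side (the actual module logic runs in Drupal/PHP), and the field names are illustrative:

```javascript
// Sketch: the mapping from a Drupal Job node to a MongoDB document.
// Field names (title, body) are hypothetical stand-ins for the Job fields.
function nodeToJobDoc(node) {
  return {
    nid: node.nid,                            // Drupal node ID: the sync key
    created: new Date(node.created * 1000),   // Drupal stores Unix timestamps
    title: node.title,
    description: node.body,
  };
}

// Updates and deletes locate the document by node ID:
function jobFilterByNid(nid) {
  return { nid: nid };
}

// e.g. on create/update:
//   collection.updateOne(jobFilterByNid(node.nid),
//     { $set: nodeToJobDoc(node) }, { upsert: true });
// and on delete:
//   collection.deleteOne(jobFilterByNid(node.nid));
```

Keeping the Drupal node ID in every document is what makes the synchronization idempotent: repeated updates simply overwrite the same document.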

Granted, the flow of data is one-way (from Drupal to MongoDB), and it assumes the mobile app will not be modifying the Job postings and synchronizing those changes back to Drupal. However, if two-way data flow is desired, either a call-back mechanism or polling (via Drupal’s cron hook) could synchronize new data back to the Drupal database.
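If polling via Drupal’s cron hook were used for the reverse direction, the core of it would be selecting documents the mobile app has changed since the last run. A minimal sketch, assuming a hypothetical `appUpdated` timestamp is stamped on documents the app modifies:

```javascript
// Sketch: pick out job documents modified by the app since the last cron
// run, so they can be written back to Drupal. The appUpdated field is a
// hypothetical timestamp set by the mobile/Node side on modification.
function changedSince(docs, lastCronRun) {
  return docs.filter(d => d.appUpdated && d.appUpdated > lastCronRun);
}
```

Each returned document’s `nid` would then identify the Drupal node to update.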

Why?

While developing this example, I realized some good reasons why this implementation is interesting:

  1. CMSs like Drupal really provide two types of content services: (a) content management, and (b) content viewing. The same website/CMS code base and infrastructure supports both interactions. However, for many site implementations, the predominant type of user interaction is skewed heavily in favor of content viewing. It turns out that a sizable amount of code has to be executed for every interaction on the site, even if it’s just to view the data. This can be a non-trivial amount of overhead, especially for high-traffic sites (even with various levels of caching). Once the content is created, if the audience who consumes it does not need to modify it, I assert it’s hard to beat the transfer time, data size, and server-side processing power required to serve a JSON/MongoDB query via a RESTful Node server.

  2. Drupal excels at content creation and management (not to mention user management, security, etc.). If you’re developing a primarily mobile application (even without user-facing website), you can leverage the vast number of contributed Drupal modules that support content creation (e.g. RSS import) for your mobile application without a great deal of development effort. Then you can focus your time developing your mobile app instead of developing the mechanism and interface for getting external content into your database.

Create a Chromebox with Peppermint Linux

Over the past fifteen years, I have needed to upgrade my computers less and less. In the late ’90s through early 2000s, every couple of years my motherboard/CPU/memory were so horribly out of date that the latest software updates almost begged me to upgrade. However, I built my last PC over five years ago (ASUS board, quad-core AMD processor @ 2.4GHz, 4GB memory), and it continues to steam along through every OS I have installed. In fact, even single- or dual-core PCs that date back to 2006 still have plenty of life in them. The reason is simple: computers have steadily become fast enough for the basic applications we use daily — email, web browsing, and office applications.

Of course, back in the mid-2000’s, there was no such prevalence of the cloud, much less any type of browser-based applications that lived within it. Applications and their data were stored on the PC, and thus required sufficient local horsepower and storage. Jump ahead to 2014 — it’s a very different story. Dropbox or Google Drive synchronize your data into the cloud, and Google’s suite of office applications are good enough for most day to day activities. With the web browser becoming the most-used application, the requirements on a PC that is already fast enough are minimal: you can conceivably get by in life with only a web browser. In fact, the bulkiest “application” that slows down the machine could be considered the operating system itself! (Looking at you, Microsoft.)

This is the premise of Chromebooks and the more recent Chromeboxes: design hardware and the OS to support a web browser in which the user does everything, and replace the standard desktop or laptop for a fraction of the cost. Companies like Google, Samsung, and ASUS are starting to sell these systems based on Chrome OS, a Linux-based variant. However, if you already have an older PC lying around, why shell out more money when you can repurpose it as a veritable Chromebox?

Peppermint is a Linux distribution that was developed under the same premise as Chrome OS, and is freely available. I discovered Peppermint a year ago when switching to Linux Mint as my distribution of choice. Mint is based on Ubuntu (itself stemming from Debian Linux), and in my opinion creates a more familiar and user-friendly desktop look and feel than Ubuntu. Peppermint, like Mint, produces a similar, familiar user experience, but aims at minimizing its own footprint so it runs speedily on low-memory, lower-performance (older) systems. While Peppermint bundles some basic applications (including Dropbox), you could argue its primary app is Chromium, the open-source web browser project behind Google Chrome. More applications can be installed, but the baseline configuration is perfect for targeting a system that uses the cloud for productivity (e.g. Google’s office applications). In short, Peppermint has become my favorite go-to operating system, especially when breathing new life back into 6-8-year-old hardware.