QR Codes as the Interface: A Scan-to-Vote App Using IFTTT and Google Sheets

In Defence of QR Codes

If QR codes aren’t being used effectively, it is more the fault of the publisher than the technology. The technology, despite its looks, is quite elegant and offers a lot of opportunity. However, there are some pretty significant barriers to adoption and use. So the burden is upon the publisher to use QR codes in the right context and provide a compelling reason to scan.

Mona Lisa QR Code

Some contexts are better than others. As this discussion illustrates, in places where the written language is not based on Basic Latin characters, like much of Asia, QR codes can help users avoid quite a lot of mobile typing. The beauty of QR codes, a type of character encoding in their own right, is that they are universal. They are like high-fidelity emojis: a lot of meaning packed into a small printed expression. This universal characteristic is why I decided to use both together in the example below.

From a pure technological standpoint, QR codes offer quite a lot. They can send up to several paragraphs of text or complete contact information. They can also trigger phones to send SMS messages, download apps and open web pages. (No wonder marketers are so eager to use them.)

The trigger to open web pages is a useful feature, but publishers’ use of this trigger has fallen short of its potential. A smartphone’s web browser is much more than just a content browser. It is an interface to the blossoming world of Web 2.0 web services. This is the unrealized potential of QR codes.

QR Codes as Application Interfaces

The technology, on the surface, is simple; its utility is elegant; and its recognition is universal. It is, in essence, just a medium, but its capabilities are broad. We just have to think within the right application.

The Web is your Application

Most of us use our mobile Web browsers to request documents from Web servers (most often HTML web pages) or interact with Web apps. And that is traditionally what QR codes have been used for. But if we think of a mobile web browser not just as a medium for viewing documents but as a means of sending HTTP requests, a lot of World Wide Web territory opens up.

Beneath the document-based surface of the Web, there is a lot happening. There is a lot of data being passed around from place to place. Most often this happens as data is requested and sent from an API to dynamically update a web page (think of your favorite travel booking site) or connect one website or service to another (think of Twitter streams on web pages).

See what Web Services are available via IFTTT

These APIs are very often available to authenticated users of the service, or even to the public. There are APIs for updating the price of items on eBay, posting to Facebook, getting the current weather conditions, or getting structured data from your favorite Reddit page. And all of these APIs, and many more, are RESTful, meaning that sending a request to a specifically structured URL will return some structured data or tell a service to do something for you. This is where mobile web browsers come in.
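As a small sketch of what “specifically structured URL” means in practice, here is how such a request URL might be assembled. The endpoint and parameters below are placeholders for illustration, not a real service:

```python
from urllib.parse import urlencode

# Hypothetical RESTful request URL: the path names the resource and
# the query string carries the parameters (the endpoint is a placeholder)
base = "https://api.example.com/v1/weather"
url = base + "?" + urlencode({"city": "Hong Kong", "units": "metric"})
print(url)  # https://api.example.com/v1/weather?city=Hong+Kong&units=metric
```

Anything that can load that URL, including a mobile browser pointed at it by a QR code, triggers the request.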

The QR Code is your Interface

So all we have to do to make a web service do something is send a request to a URL. And all we need to trigger a request to a URL is scan a QR code that contains that URL. This means that scanning QR codes becomes a physical interface to the power and connectedness of the digital world. It is now the same as pressing buttons to control the Web!

Cat Button

The question now is, what would you want this button to do? Depending on what you can connect to, this could be anything from a light switch, a messaging service, an alert system, a doorbell, or some kind of Kanban system … to a voting system.

Why not Mobile Apps or IoT Hardware?

If you’ve gotten this far and you’ve done more than just scan for cat GIFs, you are probably asking, “Why not use a mobile app or IoT hardware, where you can literally use a button as a trigger?”

The answer is cost and ease. There is a huge cost to DIY mobile applications and connected devices. Using QR codes is as simple as generating the codes and printing the papers. Again, the context must be considered. QR codes as triggers are probably best used where use is passive and takes place over a short timespan, making investment in an app or hardware infeasible.

Persuading people to download and use a mobile app is also a massive barrier. The same could be said for QR code scanner apps, but for most applications, QR code scanners are more universal than any app that would offer the same functionality. Additionally, in places where QR codes have a stronger value proposition, like Asia (see above), QR code scanners are built into apps that are already commonly used, like WeChat.

Uniquely QR Codes

There is also an interesting combination of properties that can make or break QR codes as triggers. If the publisher uses QR codes with intention, these properties can become features of the technology rather than weaknesses.

  • Contextual – Users have to be in the same physical location as the QR code to receive or send the data it contains. In a way, the location of the QR code acts as an implicit authentication: if and only if users are nearby can they scan it.
    • Inherent barrier to use – This one is difficult to view as a feature, but the barrier could serve as proof of motivation. It allows us to infer some things about the user and/or the strength of their motivation. Digital fluency, to some degree, is required, as is a measure of curiosity and/or motivation.
  • Visual – You know them when you see them and they are their own call to action. They are the same as putting a big link on a page that says “Click Here!”
  • Coded – Again, they hold more information or instructions than one could ever hope to print in the same space and have it make sense. They also allow for long, dynamic URLs or if you so choose, intentional misdirection.
  • Can hold and send information about their context – Depending on how dynamic the publisher chooses to be in creating QR codes, the codes could contain an element of mass customization. The QR code’s URL could send the destination URL (web page or web service) information about the context of the QR code. This could allow the receiver of the Web request a customized response or action in the same way that a search query on your favorite ecommerce site provides customized information on the following page.
  • Duplication – OK, it’s really hard to see how this could be anything but a weakness. It is somewhat difficult to ensure that a URL is not requested twice, whether by scanning a code twice or by a browser refresh.

Motivating this Example

The intent is to demonstrate two things: dynamically generated QR codes that hold contextual data and web requests to a web service rather than a web page. To do this, I decided to create a QR code ballot where votes are tallied in a cloud service.

There have been other applications of QR codes for voting. In most of these, the QR code provides a link to an online form (hopefully mobile-friendly) where voters can enter their responses. Unlike those, in this version, scanning the QR code is the actual act of voting. This infrastructureless mashplication is also free and an MVP of what could be done.

Building a Prototype

For a test case, I used a project that has been happening in my neighborhood for a couple of years called HK Walls. This project pairs wall donors with artists from around the world to create awesome public art. At the moment, each of these walls has nothing to signify that it is part of the project, or the project itself.

The HK Walls project fits perfectly into this voting idea because it is location-based and inherently mobile (it has its own hashtag). It is also clearly visual and lends itself to normative voting. Art is either the perfect or the worst use case for this, but this is just a test. There are plenty of other applications if you use your imagination.

To build the prototype, very little is needed; just Google Spreadsheets, IFTTT, a B/W printer and some spare time.

Step 1: IFTTT Recipe Setup

This step provides the base URL for the IFTTT Maker channel. It even creates the spreadsheets for all future steps. See my recipe.

IFTTT Recipe: QR Code Scan to Vote connects maker to google-drive

Step 2: Dynamically Generate Lots of QR Codes

I used Google Spreadsheets and Google Charts’ QR code API to generate the QR codes. Each QR code was dynamically generated by encoding a combination of an IFTTT Maker Channel URL, the location of the HK Wall and the emoji variant. The result was 88 QR codes with different contextual (location) and normative (emoji) information.
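The same logic the spreadsheet formulas perform can be sketched in a few lines. The Maker key and event name below are placeholders, and the value1/value2 parameter names are the ones the Maker channel accepts:

```python
from urllib.parse import quote

MAKER_KEY = "YOUR_IFTTT_MAKER_KEY"  # placeholder for your own key
EVENT = "qr_vote"                   # hypothetical Maker event name

def vote_url(location, emoji):
    # The IFTTT Maker channel fires when this URL is requested; value1 and
    # value2 carry the contextual (location) and normative (emoji) data
    return (f"https://maker.ifttt.com/trigger/{EVENT}/with/key/{MAKER_KEY}"
            f"?value1={quote(location)}&value2={quote(emoji)}")

def qr_image_url(data):
    # Google Charts' QR code API renders any string as a QR code image
    return ("https://chart.googleapis.com/chart?cht=qr&chs=200x200&chl="
            + quote(data, safe=""))

# One QR code per (wall, emoji) combination
print(qr_image_url(vote_url("Tai Ping Shan St", "😍")))
```

Scanning the printed image requests the embedded Maker URL, which logs the vote to the Google Sheet.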

Voting With QR Codes

The normative emojis are placed above each location’s row of QR codes so that they can be cut out together as a set. After finishing step 3, I added a column of QR codes that would send scanners back to the results chart.

See the Google Spreadsheet QR Code Generator for Voting

Step 3: Setup and Publish the Results Chart

The fifth column holds a QR code that links scanners to the aggregate results of the poll (average rating values for each wall). This is a simple chart based on a pivot table of average scores. The chart was also published so it could have its own URL for scanning.

The Google Chart URL shows this on a mobile browser:

Step 4: Print the QR Codes and Rock the Vote!

If I were to do this in real life, I would print these all out and paste them up on the walls. But it is not my place to facilitate judging of free public artwork; that seems to oppose the spirit of the project. I encourage you, though, to use this for your own democracy!

Use Your Imagination

I had to see that this was possible. I had to know that QR codes could control the connected Web. I had to see that it could scale with minimal technical know-how.

I am satisfied with the result. I think there is some testing to do (I am still thinking of ways to gather some actual data on QR code use in Hong Kong), and there are many applications yet to explore. I hope this might motivate your curiosity to find what’s left to find.

Is Slack Messenger Right for My Team? Analytics and Answers

From AOL Instant Messenger to WeChat stickers, digital communication has always fascinated me. From the beginning, there has always been so much we don’t understand about digital communication. It’s kind of like GMO; we just started using it without considering the implications.

We are continually learning how to use the digital medium to achieve our communication goals. Meanwhile, our digital communication tools are ever evolving to better suit our needs. A prime example of this is the team messaging app, Slack.


Slack has adapted well and I would argue that it has dominated its ecosystem. There are a few reasons why I believe that it’s earned its position:

  1. It’s easy.
  2. It’s flexible.
  3. It’s not too flexible.

As a tool, Slack is malleable enough to form-fit your communication theories and practices, and it does little to dictate them. This means that its utility and its effect are less a factor of the tool and more a factor of our ability to shape its use.

So when the question was posed, “How well does Slack fit our needs as a team?” I have to admit I wasn’t sure. Days later, in my head, I answered the question with two more questions:

How well have we adapted the tool to us?

How well have we adapted to the tool?

The questions felt somewhat intangible, but I had to start somewhere and, me being me, I asked the data. I’ll admit I haven’t gotten to the heart of the questions… yet. But I did start to scratch the surface. So let’s step back from the philosophy for a minute, walk through the story, and start answering some questions.

So yeah, we tried Slack… Six months ago

A recently formed, fast-moving and quickly growing team, we believed that we could determine our own ways of working. In the beginning, we set some ground rules about channel creation and, believe it or not, meme use (hence the #wtf channel). And that was about it. We promised ourselves that we would review the tool and its use. Then we went for it.

A while later, as I mentioned, a manager pointed out that we had never reviewed our team’s use of Slack. It seemed fine, but the questions started to crop up in my head. Me being me, I had to ask the data.

This all happened about the time I started to play with pandas. I didn’t answer the questions, but I did get frustrated. Then I read Python for Data Analysis, pulled the data out of the Slack API (which only provides data about channels) and went a bit crazy with an IPython notebook.

To answer my theoretical questions, here are the first questions I had (and a few I didn’t), along with their answers.

How is Slack holding up over time?

Stacked Time Series
Don’t judge me. This was my first go with matplotlib.

This stacked time series shows the number of posts per channel (shown in arbitrary and unfortunately non-unique colors) per week. The top outline of the figure shows the total number of messages for each week. The strata represent different channels, and the height of each stratum represents the volume of messages during a given week.

It appears that there is a bit of a downward trend in the overall number of messages per week. A linear regression supports that: the regression line indicates a trend of about two fewer messages each week.

Linear Regression
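The regression itself is nearly a one-liner. Here is a minimal sketch with made-up weekly totals (the real notebook used our Slack export):

```python
import numpy as np
import pandas as pd

# Hypothetical weekly message totals, indexed by week number
weekly = pd.Series([120, 95, 140, 88, 101, 76, 90, 70], name="messages")

# Least-squares line; the slope is the average week-over-week change
slope, intercept = np.polyfit(weekly.index, weekly.values, 1)
print(slope < 0)  # a negative slope means declining use
```

The fitted slope is what the regression line in the chart visualizes.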

If you ask why there appears to be a downward trend in total use over time, I think there are a few ways to look at it. First, the stacked time series shows that high-volume weeks are generally the result of one or two channels having big weeks, so the decline looks more like the ebb of those spikes than a slowing of use overall. This makes sense if you consider how we use channels.

We have channels for general topics and channels for projects. And projects being projects, they all have a given timeframe and endpoint. This would explain the “flare ups” in different channels from time to time. It would also explain why those same channels come to an end.

One way to capture the difference between short-lived project channels and consistent topic channels is with a box plot. A box plot represents the distribution of weekly message totals for each channel, showing the high and low weekly totals and the interquartile range that weekly totals commonly fall into.

Slack Analytics Channels Box Plot
Each box plot represents a Slack channel. The Y axis scales to the number of messages in that channel.

For a specific example, the channel on the far left (the first channel created, named #generalofficestuff) has had a relatively high maximum number of messages in a week, a minimum around 1 or 2 (maybe a vacation week), and 50% of all weeks in the last six months fall between about 7 and 28 messages, with an average of 10 messages per week.

On the other hand, channels on the right side of the chart, more recently created and generally project-specific channels, describe the “flare ups” that can be seen in the stacked time series chart above. If you wanted to look deeper, you could make a histogram of the distribution of week totals per channel. But that is a different question and, for my purposes, well enough described with the box plot. 
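The quartiles behind each box are easy to compute directly with pandas. A sketch with made-up weekly totals for one channel:

```python
import pandas as pd

# Hypothetical weekly message totals for one channel
weeks = pd.Series([28, 7, 15, 22, 2, 10, 18, 9])

stats = weeks.describe()
# The box spans the interquartile range (25% to 75%);
# the whiskers reach toward the min and max
print(stats["25%"], stats["75%"], stats["min"], stats["max"])  # 8.5 19.0 2.0 28.0
```

These are exactly the numbers a box plot draws: the box edges at the 25th and 75th percentiles, the whiskers toward the extremes.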

So… how is Slack holding up over time?!

The simple answer is: use is declining. Simple linear regression shows this. The more detailed answer is: it depends. As the stacked time series and box plots suggest, in our case, use over time is better understood as a factor of the occurrence of projects that lend themselves especially well to Slack channels. I know what you’re saying: “I could have told you that without looking at any charts!” But at least this way nobody is arguing.

Projects… What about People?

Another way to look at this question is not by the “what” but by the “who.” Projects, and their project channels, are basically composed of two components: a goal or topic, and a group of people working toward that goal. So far we have only looked at the goal, which leaves the question, “Are the people a bigger factor in the sustainability of a channel than the topic?”

I looked at this question many ways, but I think I finally found one visual that explains as much as one can. This heat map shows the volume of messages in each channel per person. It offers a view into why some channels might see more action than others, and it also suggests how project/channel members, and the synergy between them, might affect a channel’s use.

Slack Analytics Hierarchical Clustering Heatmap
Volume of messages is represented by shade, with users (user_id) on the Y axis and channels on the X axis. Hierarchical clustering uses Euclidean distance to find similarities.

What I think is most interesting in this visualization is that it shows the associations between people based on their amount of involvement (posts) in each channel. The visual indicates that use is perhaps as much a factor of people as of the channel’s project, topic, or time.

There are, of course, other factors. We cannot rule out the possibility of communication moving into direct messages or private groups. But again, that is another question and beyond the bounds of this investigation.

So what?

So we got a glimpse at the big picture and gained a pretty good understanding of the root cause of what motivated the question. This is my favorite part. We get to sit back, relax, and generate a few new hypotheses until we run into a new question that we can’t avoid.

What I think is coolest about the findings is that they suggest a few more hypotheses about which communication media our team’s communication occasionally moves to and which media Slack competes with. Now these investigations start to broach the fundamental questions we started with!

There are a few things at play here, and the following are just guesses. It could be that email dominates some projects or project phases because we are interacting with outside partners (people) who, for whatever reason, cannot or will not use Slack. Sorry, Slack. It could also be that, due to the real world we live in, communication is happening over chat apps like WeChat or WhatsApp.

In either case, we return to the idea of people adapting to tools that are adapting to people. The use of digital communication tools reflects the people who use them and each person’s use reflects the structure and offerings of the tool.

And what’s next?

Hopefully, if you read this, you have more questions about this reality, and I might (probably will) go on to try to answer a few more. I think there are a few interesting ways to look at how people are norming with Slack.

Maybe you are interested in how all this pandas/matplotlib stuff works, because I am too. So I think it will be fun to post the IPython notebook and show how it all works.

Otherwise, it will be interesting to watch how this tool and this team continue to evolve.

Digital Marketing Technical Skills 4/4: How to Read the Docs and Where to Find Help

Getting Started and the Learning Mindset

Being a technical marketer is as much about using technologies as it is about learning and adapting to new ones. The learning mindset may not get its own bullet point on a resume (even though it probably should) but its value increases with the rate of new technology. Lucky for you and me, the learning mindset has nothing to do with education or intelligence – it is solely about determination.

Determination is tested the most in the beginning when you don’t know where to start. There are tons of resources for learning Web and programming technologies but there are far fewer resources geared to learning these technologies for digital marketing. The corpus of technical marketing resources is mostly comprised of blogs, API documentation, open source code repositories and forums. Your search skills will be tested.

I have chased down plenty of rabbit holes and run into many dead ends. From all this confusion I have found a pretty familiar flow of how different types of resources are used. The rest of this post is about how these different resources are used in the ideation, development and execution of technical marketing tactics.

Ideas generally come from one of two places: a problem that needs a solution or the blogs of other creative digital marketing geeks. From there, the idea will generally have to be tailored to a specific use case.

Ideas + API Docs + Code = Execution

Part of applying a good idea to your problem is determining whether it can be executed. For this, I often turn to API documentation. Whether it is the DOM API or a REST API for a marketing technology, this is the place to find out what the digital medium is capable of.

API docs describe what the API allows a “client” (i.e., a program acting on your behalf) to do and how the API can be manipulated to do it. Sometimes reading API docs can even broaden ideas by introducing the full set of the API’s capabilities!

API documentation is typically structured in one of two ways: the REST API way or the code library/Software Development Kit (SDK) way. REST API docs generally describe what resources (structured data) or methods (things the API can do) the API offers. They also need to communicate how these resources can be obtained. These docs usually offer a description of the resource, the location of the resource (URL) and a sample resource representation in code.

A great example of a REST API is Optimizely’s REST API.
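A sample resource representation in such docs is usually just JSON that a client can parse directly. This one is invented for illustration, not taken from Optimizely’s docs:

```python
import json

# A made-up JSON resource representation of the kind shown in REST API docs
sample = """
{
  "id": 123,
  "project_name": "Homepage Test",
  "status": "Active"
}
"""

# A client turns the representation into structured data it can work with
resource = json.loads(sample)
print(resource["project_name"])  # Homepage Test
```

Reading these samples tells you exactly which fields a request will give you back before you write any code.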

Library/SDK docs generally describe the “public” methods (methods intended for users of the library, not its internal workings) that the API offers. These docs communicate what each method does, how the method is called and what the options are for using it. Code samples are usually included to show these details.

Good API documentation that you are likely to come into contact with includes Mozilla’s Browser API, jQuery’s API and Google’s various code libraries’ APIs.

GitHub: The Best Thing Ever

When it’s time to begin developing an idea, a great way to start is to look for code that either accomplishes what you want or gets you the first 80% of the way. Hello, GitHub! If you don’t already know about GitHub, you’d better learn quick. It is a public code-sharing repository and version control system where you can find code on just about anything! It has enabled almost every great open source project and is one site that truly represents what is great about the Web. It is a great place to start from when launching your solution development.

This is the code you are looking for.

Many web searches will lead to GitHub by default, but GitHub also offers excellent advanced search functionality. If you can’t find the code you are looking for there, it may be the rare occasion that the code has either not been written or not been shared yet. When you find a repository that looks like what you are looking for, there are a few things to know to make sense of what you have found:

  1. A repository represents a file directory, and each file in the directory can be viewed separately.
  2. The readme.md (a Markdown file) generally offers a description of what the code repository has to offer and how it is used.
  3. There are metrics associated with the repository that generally indicate its popularity and overall use. (More popular generally means more stable, trustworthy code.)

StackOverflow: The Duct Tape of the Internet

When your brilliant idea looks executable and code is on the screen, there is still a way to go to complete your technical solution. You will probably find some rough edges and interfaces that don’t connect the way you want them to. When these bumps in the road arise, the best place to go for quick, problem-specific answers is StackOverflow. This site has solved more problems than duct tape, bubble gum and elbow grease combined. And like GitHub, StackOverflow is another “best of the Web” site in my opinion.

StackOverflow is a question-and-answer forum for programming questions. It rewards its users for asking great questions and providing great answers. This, in turn, increases the knowledge of everyone. Another benefit of the forum format is that many questions receive several legitimate solutions, so you are far more likely than not to find one that meets your needs.

To use StackOverflow, you will generally start with a search. Googling “[error message] [programming language or application]” will often land you on a StackOverflow page. But because the site is full of natural-language questions and answers, Googling a specific question will also turn up solid results from the site. In the beginning, if you are unable to find what you are looking for through Google searches, use StackOverflow’s excellent advanced search functionality; there is a lot in there. In my experience, as Google starts to recognize that I have favored StackOverflow search results, it provides them more often.

Finding answers on StackOverflow can be daunting at first, but you just have to know what you are looking for. The /questions/ pages all have the same format, which you will get familiar with.

The heading is the question or problem description that was originally submitted, and directly below that is a detailed description of the problem. Below that are the responses. Users rate questions and responses based on their quality and how helpful they are. Scores reflect their popularity and generally reflect best practices. But if you don’t love the first answer, keep looking at the answers below; you might find one that points you in the right direction. Also pay attention to the date of the question, and make sure that both the question and answer are still relevant to your use case.

Finally, I have to mention Quora, partly because the amateur philosopher in me loves it, but also because it is great for asking higher-level questions. It is a great place to hear from seasoned pros and a perfect place to discuss philosophical matters, like whether one approach is better than another. Google forums, Google+ groups and Meetups are also good places to interact with people who are solving the same technical problems as you.

With all these resources, it comes down to a matter of time and determination. Never give up if a solution seems too difficult. Keep searching and keep learning. Learning the technical stuff is all about momentum.

Dynamically Pre-fill Google Forms with Mailchimp Merge Tags

Why? Because, connect all the things! Why else? Because the less information people have to fill into your form, the more likely they are to complete it – especially if you are asking them for information they would have to look up! Dynamically pre-filling forms is a great conversion optimization hack to show your prospective respondents a little love. This love will increase form completion rates and response accuracy.

The question originally came to me from someone who had read my post about Mailchimp Reporting with Google Spreadsheets (same products but different applications). But hey, sometimes you don’t even know where to start your Google search! His question was this:

I send a reminder email to my customers through MailChimp. The email contains the customer’s account code and some other data unique to the customer. A link in the email sends the customers to a Google Form where they will answer some questions. I want to have the customer’s unique customer code populate a field so that I know who the response came from, and in doing so, reduce the number of fields a customer would have to complete.

Do you have a solution for this? MailChimp and Google apps said there isn’t, but Google did suggest it might be resolved with a script (which I have no idea how to do).

It seemed interesting. I thought about the script. I was hoping this would not be the answer, because that sounded like trying to breed an ostrich and an alligator. Then I remembered a project I had worked on tracking Wufoo form submissions in Google Analytics. That was an Ostrigator, but hey, it worked!

The key was passing data around on query strings. I knew there were dynamic values in Mailchimp in the form of Merge Tags. Then all we needed was a link to Google Forms that would allow you to pre-populate the form. Lo and behold: Pre-populate form answers! So all we would have to do is match the right merge tags to the right form URL query string parameters, and we would have forms pre-populated from the dynamic values in the emails. #masscustomize all the things!

Merge Tags and Google Form Links

Step 1. Get a Pre-Filled Google Form URL

To get started, go to your Google Form editing page and click “Responses,” then select “Get pre-filled URL.”

Hubdango   Google Forms

Step 2. Pre-fill the Form with Merge Tags

Find the merge tags that you want to use and enter them into the form boxes. When you do this, make sure to use merge tags that are accurate for your whole mailing list. Missing information is not a problem, but if your merge tags are inaccurate, this could cause confusion and/or complaints.

User Validation

Step 3. Copy the Pre-filled Google Form URL

The merge tags have been appended to the form’s URL as query string parameters. Now you have a link to your Google Form that will automatically fill your form fields with the values of your Mailchimp merge tags.
User Validation 2
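Putting the pieces together, the final link is just the pre-filled URL with a merge tag as a parameter value. The form ID, entry ID and merge tag name below are placeholders:

```python
from urllib.parse import quote

# Placeholder form ID and entry ID; MMERGE3 stands in for your customer-code tag
base = "https://docs.google.com/forms/d/e/FORM_ID/viewform"
merge_tag = "*|MMERGE3|*"

# Keep the asterisks literal (safe='*') so Mailchimp can recognize the merge tag
url = f"{base}?entry.1234567={quote(merge_tag, safe='*')}"
print(url)  # ...viewform?entry.1234567=*%7CMMERGE3%7C*
```

This reproduces what the “Get pre-filled URL” step does for you: the merge tag sits in the query string, percent-encoded, waiting for Mailchimp to swap in each recipient’s value.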

How Query String Parameters Work

At this point, you may be wondering how this works. You have a URL that looks like this:

followed by this:

The part starting with the question mark (?) is the query string. It is made up of key-value pairs connected with ampersands (&). Query strings are used to pass information to web pages or “resources.” The server that handles the request knows what the query string’s keys mean and, depending on each key’s value, will dynamically generate or modify the resource. (See how query strings work with APIs.) In this case, Google’s server just takes the value of each field’s key and places it in the form field associated with that key.
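You can see the key-value mechanics with Python’s standard library (the entry IDs are placeholders):

```python
from urllib.parse import parse_qs, urlencode

# Build a query string from key-value pairs...
query = urlencode({"entry.1234567": "ABC-001", "entry.7654321": "5 stars"})
print(query)  # entry.1234567=ABC-001&entry.7654321=5+stars

# ...and this is how a server reads it back out
print(parse_qs(query)["entry.1234567"])  # ['ABC-001']
```

Note that encoding and decoding round-trip cleanly, which is why forms, emails and servers can all pass data this way without losing it.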

But why does the ‘First Name’ Merge tag look like *%7CFNAME%7C* ?

You have seen strings on the internet, often in URLs, that have something like “%20” in them. This is called URL encoding. It is one of those web standards that allows the magic of the web to work. Because the pipe character (|) is not a URL-safe character, it is encoded as “%7C”: a percent symbol followed by the character’s hexadecimal value.

Don’t worry: when you paste this into your Mailchimp email, Mailchimp will automatically decode the URL, and it will end up looking like this:

Now go answer every other question you have ever had about URLs.

Now that you’re back, go set up your new email campaign.
Mailchimp Merge Tags in links
Then when your email recipient and prospective form respondent gets their Mailchimp email, the link will be dynamically modified to look like this:

and… Hello World! Mailchimp has correctly rendered the links with the dynamic values from the merge tags.
Rendered links with Mailchimp Merge Tags

Why Digital Marketing is Science + Art

When it’s all said and done, an appealing email subject line and effective copy will only get respondents to the form page. After that, there is a lot left to optimize. As digital marketers, we are doing much more than trying to persuade. We are delivering a message, removing friction and optimizing experiences. We are measuring and testing all the way! Don’t forget, our job is just as much about perfecting the message as it is the medium.



Track Web APIs with Google Analytics’ Measurement Protocol

Google Analytics got a whole lot more interesting when the Measurement Protocol was introduced. We already knew GA was the industry standard for web analytics, but with the Measurement Protocol it has become the analytics platform for anything and everything that can be made digital. With some clever instrumentation, we can now use it to track products through the supply chain or track users’ interactions in a store. All you need is a way to collect digital data and send HTTP requests to Google Analytics, and you can track anything.

I had to try it out for myself. While I could have fitted #rhinopug with a tracking device or instrumented my coffee machine with an Arduino, I took the easier (but equally cool) route to getting data: a Web API. As my proof of concept, I chose to track the SwellPath team’s group chat application called GroupMe.

Google Analytics Measurement Protocol
GA Dashboard Courtesy of Mike Arnesen

Tracking a chat app turned out to be a pretty cool way to walk that physical/digital line. While we are humans working in the same office, it’s interesting to compare the contextual information we can see and hear against a very objective measure of communication: desktop and mobile messaging. This concept is similar to other measures of digital communication, like the Twitter firehose or brand mentions from news APIs. Those are probably much more relevant to, and could actually affect, a website’s performance but, let’s be honest, this one’s a lot more fun.

Mapping Data to Google Analytics

Digital messaging actually fits the Google Analytics reporting interface quite well. The main reason is this: timestamps. Everything in Google Analytics is a time-based hit, and we rely heavily on timestamps to analyze it all. We ask Google Analytics how different landing pages perform as seasons change and what time of day users are most likely to convert (in order to bid intelligently on ads). Likewise, there is a natural rhythm to work-based communication. Of course (or hopefully), it’s pretty quiet on the weekends and generally pretty active as people start each workday.

The other reason that human communication maps well to the Google Analytics reporting interface is that message creation is a lot like content consumption. When we really think about what a “hit” schema looks like, it has a few entities that go together something like this:

[actor] did [event] on [location] at [timestamp]

This “hit” schema works equally well for describing message creation as it does content consumption.

With every hit, the [actor], a.k.a. User, is assigned attributes like Device or New/Returning, and the [event] (an Event, Pageview, or otherwise) has attributes like URL and Page Title for Pageviews, or Action and Label in the case of Events. The [location] is an interesting one. For the web, it’s the page the user is browsing, but it’s also the physical location of the user: a lat/long pair with the appropriate geographic information. The [location] attributes are generally handled by Google Analytics automatically but, speaking from experience, the real art of a good collection strategy is mapping the right information to the right attribute of each entity.

To make sense of the idea of mapping information to attributes, let’s get back on track and talk about GroupMe. It boils down to this: you have data, and you want it to appear in Google Analytics in a way that you can logically sort, filter, and analyze. This is where the mapping comes in.

GroupMe’s API gives you data about a group’s messages like this:
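The original listing did not survive, so here is an abbreviated sample shaped like a GroupMe v3 messages response. The field names follow the GroupMe API; the values are made up:

```json
{
  "response": {
    "count": 1,
    "messages": [
      {
        "id": "1234567890",
        "source_guid": "GUID-1234-ABCD",
        "created_at": 1398894910,
        "user_id": "100001",
        "name": "Trevor",
        "text": "Hello, team!",
        "favorited_by": ["100002", "100003"]
      }
    ]
  }
}
```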

If this doesn’t make sense to you, go read up on JSON. Essentially, when you ask the GroupMe API for the most recent messages, it returns a list of messages with, among other things, the sender’s name and user ID, the message text, the number of likes, and the location. So we have information about each of the “hit” entities. The user, event, place, and time are all described. The only thing missing that is critical to web analytics metrics is something similar to Page. For that reason, I decided to use Google Analytics Events to describe each GroupMe message. Each hit maps GroupMe data to Google Analytics as follows:

Google Analytics Parameter: GroupMe Data (JSON key)

  • User ID: GroupMe user ID (user_id)
  • Client ID: GroupMe source GUID (source_guid)
  • Custom Dimension (User): GroupMe username (name)
  • Event Category: “GroupMe Chat” (static value)
  • Event Action: “Post” (static value)
  • Event Label: truncated text of the message (text)
  • Event Value: count of likes (count of favorited_by)
  • Queue Time: difference between now and the message timestamp (current time − created_at)
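In code, that mapping might look like the sketch below. The parameter names (tid, cid, uid, t, ec, ea, el, ev, qt, cd1) are real Measurement Protocol v1 parameters; the property ID is a placeholder:

```javascript
// Map one GroupMe message to Google Analytics Measurement Protocol
// (v1 /collect) parameters. "UA-XXXXX-Y" is a placeholder property ID.
function messageToHit(msg, nowSeconds) {
  return {
    v: '1',
    tid: 'UA-XXXXX-Y',                         // GA property ID (placeholder)
    cid: msg.source_guid,                      // Client ID
    uid: msg.user_id,                          // User ID
    t: 'event',                                // hit type
    ec: 'GroupMe Chat',                        // Event Category
    ea: 'Post',                                // Event Action
    el: (msg.text || '').slice(0, 100),        // Event Label (truncated text)
    ev: (msg.favorited_by || []).length,       // Event Value (like count)
    qt: (nowSeconds - msg.created_at) * 1000,  // Queue Time in milliseconds
    cd1: msg.name                              // Custom Dimension: username
  };
}

const hit = messageToHit(
  { source_guid: 'GUID-1', user_id: '100001', name: 'Trevor',
    text: 'Hello, team!', favorited_by: ['a', 'b'], created_at: 1000 },
  1060
);
console.log(hit.qt); // 60000
```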

Then each GroupMe message is sent to Google Analytics in an HTTP request with data mapped to GA parameters as shown above. Collect data for a few days, and it looks like this:

Measurement Protocol Specific Values: Queue Time and Client ID

If you come from a web analytics frame of mind, there may be two things unfamiliar to you: Client ID and Queue Time. These are both a pain to get right but functionally awesome.

The Client ID is something you don’t have to think about for web data collection; it’s automatically collected from a cookie that Google Analytics sets for you. It is very important, though. It is the key for differentiating two devices that, by their collectible attributes, “look” the same but are not. The CID must follow very specific rules to be valid and, lucky for me, GroupMe offers a GUID for each message that fits the specification.

Queue Time is awesome. It is the single most important factor in getting the time value of a Measurement Protocol “hit” right. It is the delta (a cool way to say difference) between the time the event occurred and the time the hit was collected. If you send the hit to Google after the event took place, Google’s servers calculate the time delta and record the hit at the time it actually took place.

This was especially important for the method I used to get data from GroupMe and send it to Google Analytics, because I was only polling the GroupMe API once an hour. Without Queue Time, the hit timing would be very low fidelity, with spikes each hour when the data was collected and sent. By calculating the Queue Time for each message, I got accurate timing and didn’t have to worry about burning through API limits or wasting lots of HTTP calls. (Think about it: without Queue Time, your data is only as accurate as the frequency at which your hits are sent, which in this case was a cron job.)

Google Analytics Measurement Protocol API
Don’t call it a hack. Ok, call it a hack.

Lessons Learned / How I’d Do it Next Time

This ended up working out pretty well thanks to a fair amount of luck and plenty of read-the-docs, code, debug, repeat. I got lucky when I realized I hadn’t accounted for things like the mandatory Client ID parameter and … the fact that my server doesn’t run Python cron jobs. As a result, I ended up writing my first PHP script, and here I am sharing 100-some lines of amateur code. But hey, this proof of concept works!

If I were to do this again, I would answer a few questions before I started:

Get to know the API

  • Will the API I want to track give me all the data I need?
  • Are events timestamped or do I have a way to approximate that?
  • How difficult is authentication, and how long does it last?
  • Am I going to operate safely within the API rate limits?
  • What about Terms and Conditions of the API data?

Map the Data to Google Analytics

  • How will I avoid recording the same hit twice?
  • What type of Google Analytics Hit will I use?
  • How should I map the API’s data to a Google Analytics hit?
  • Can I write some code to automate this?

How the Code Works

The code I wrote to automate this is listed below but, if you are unfamiliar with PHP or code in general, the instructions given to the computer are essentially this:
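In outline, the script does the following. (My actual script was PHP; this is a hedged JavaScript rendering of the same steps, with the fetch and send steps left as comments since they need live credentials.)

```javascript
// Each hourly cron run:
// 1. fetch the most recent messages from the GroupMe API
// 2. keep only messages newer than the last run (avoid duplicate hits)
// 3. map each message to Measurement Protocol parameters
// 4. POST each hit to https://www.google-analytics.com/collect

function newMessagesSince(messages, lastRunTimestamp) {
  // Step 2: filtering by timestamp is what prevents double-counting
  // between runs.
  return messages.filter(m => m.created_at > lastRunTimestamp);
}

const fetched = [
  { text: 'old message', created_at: 100 },
  { text: 'new message', created_at: 200 }
];
console.log(newMessagesSince(fetched, 150).length); // 1
```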

It was a fun project and, luckily, a successful proof of concept for tracking non-website data in Google Analytics. If you’re thinking about doing a Measurement Protocol project, leave a comment or tweet me at @realtrevorfaux (don’t worry, I’m not tracking it). If you’re interested in other cool ways to track offline transactions, check out Google Analytics Enhanced Ecommerce. I really look forward to what comes of the Measurement Protocol with things like IoT. Connect, collect, and analyze all the things!

The PHP code I used is below. Give me a break; this is the first (and maybe last) PHP I’ve ever written.


How to Harness the Power of APIs for Digital Marketing

Part III of $10k Tech Skills for Digital Marketing

You could be so much faster! Right now, you are using your browser to view HTML pages. You go to a page, get some information, and move on. Imagine how much more information you could gather if you could tell your browser to look at ten pages per second, compile it all, and give you the executive summary. This is the power that APIs and a little bit of code provide.

Powerful Data Meme

APIs are your key to the Web at scale. An API (Application Programming Interface) allows you to interact with services like social networks and marketing, advertising, or analytics platforms. With a bit of code or the right tools, APIs can be used to do almost anything a digital marketer does manually. The real power of marketers who can program comes from using APIs to programmatically execute marketing tactics that would otherwise be impossible for humans to do at the same speed.


Painless Reporting

“I love creating the same report every month”  No one ever.

Reporting is part of being data-driven but, unfortunately, it sucks. It is time consuming and takes your valuable time away from creating or optimizing marketing strategies. APIs are a very elegant answer to recurring reports. If you have to get data from several places just to format it into a spreadsheet or charts, let APIs help. Google Spreadsheets and Google Apps Script offer a friendly API that allows you to populate spreadsheets and charts with data programmatically. Just make a call to your analytics, email marketing, or advertising service’s API to create up-to-date dashboards.
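As a sketch of the idea: the pure helper below reshapes API records into the 2D array that Sheets expects, and the commented lines show the standard Apps Script calls that would write it to a sheet (the sheet name and record fields are made up for illustration):

```javascript
// Turn an array of API records into the 2D array a spreadsheet wants:
// one header row followed by one row per record.
function toSheetValues(headers, records) {
  return [headers].concat(records.map(r => headers.map(h => r[h])));
}

// In Apps Script, you would then write it to a sheet, e.g.:
// const sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Report');
// sheet.getRange(1, 1, values.length, values[0].length).setValues(values);

const values = toSheetValues(
  ['date', 'sessions'],
  [{ date: '2015-01-01', sessions: 42 }]
);
console.log(values.length); // 2
```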

Automation: The Key to Growth Hacking

There are also many other tasks, from response emails when users sign up, to keyword research using Google’s auto-suggest, to PPC bidding with Google AdWords, that can be automated using APIs.

Keyword Research? Ain't nobody got time for that!

Even social media management can be automated using APIs (think of that annoying “growth hack” people try when you tweet a hashtag and they favorite it or add you to a list three seconds later). I am not saying it should be done, just that it can be done.

All of these digital marketing tasks have one thing in common: they have an input and, depending on what that input is, a slightly different output. These tasks are perfect candidates to be handled by algorithms. Write the instructions in code to decide when and how these things are done, and your work time becomes a lot more efficient and your work will scale. Think of this as IFTTT (which is all about interconnecting APIs with some logic) for digital marketing.

Web Scraping: Research at Scale

Web scraping is very similar to interacting with APIs, but in this case the “API” is a collection of HTML pages or files with consistent formatting. Take, for example, the indeed.com trend graph images I have been using; depending on the URL input, their server responds with a different image. Try it yourself (change {keyword} to a job or skill):
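The elided URL looked something like the template below. Treat the exact /trendgraph/jobgraph.png path as an assumption based on the graphs used in this post; the point is the pattern of substituting a keyword into a templated URL:

```javascript
// Build one trend-graph image URL per keyword from a URL template.
// The path is an assumption; swap in whatever templated URL you are scraping.
function jobgraphUrls(keywords) {
  return keywords.map(k =>
    'http://www.indeed.com/trendgraph/jobgraph.png?q=' + encodeURIComponent(k)
  );
}

const urls = jobgraphUrls(['python', 'data science']);
console.log(urls[1]); // note the percent-encoded space in "data%20science"
```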


If you were looking for the newest trending skills, instead of searching their website over and over, you could make hundreds of requests to their /jobgraph URL and quickly scan the images to see what skills you need to learn. That might be a trivial example, but you get the idea!

As you can see from the last example, APIs can be used to speed up processes at any scale but are absolutely essential to large ones.

Hacking Social for Fun and Profit

Exploring APIs can offer new and creative ways to market your product or service. One especially remarkable tactic that employs the Instagram API was started by Marais Shoes last year. Marais uses the Instagram API to check, in real time, for comments on their images that contain the hashtag “sold.” If the comment author has signed up on their site using Instagram, they are emailed a link with the featured product conveniently in their shopping cart. This brilliant tactic would not have been possible without the combination of marketing savvy and an understanding of API technology.


Finally, in order to know what an API is capable of, a marketer must know what data can be interacted with and how it is transmitted. This is where transfer protocols and data structures get involved. But don’t let the language scare you; at an operational level, these are not that complicated! The large majority of APIs these days are very user friendly thanks to REST (REpresentational State Transfer) and JSON (JavaScript Object Notation) formatted data. JSON is very easy to understand once you try. It has a very simple set of rules about how data is organized and formatted. These rules are much like MLA formatting for research papers (except much simpler). The consistency allows us (computers and humans) to easily understand information because we know what formatting to expect.

Here is an example of JSON-formatted data. It comes from the Facebook API and is the simplest representation of me in the eyes of Facebook. See, not so bad!
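The original snippet did not survive, but the shape matches a basic Graph API /me response; the values here are placeholders:

```json
{
  "id": "100000000000001",
  "name": "Trevor Fox"
}
```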


Learning How to Use REST APIs

A great way to build an understanding of REST is a cool tool called Postman. Postman is a Chrome app that acts as a “client” to inspect what data is available, and in what way, from different APIs. You enter the URL of the service you want to work with, and Postman brings back the data and formats it in a way that is easy to understand. From there you can write a little bit of Google Apps Script or server-side code and automate the whole process.

Another cool tool is cURL. This is the nerdy grandfather of Postman that lives in the shell. It works the same way as Postman, except you enter the URL on the command line and the data is printed to the screen. cURL is definitely not as user-friendly as Postman, but it is good to be familiar with because many APIs and services refer to cURL in their documentation. Spend one hour learning cURL. It will save you loads of time banging your head against API error responses.

For more on learning how to use APIs, the next post explains how to make sense of API documentation, how to find the data, and how to make the data work for you.

How to Make a RESTful API or Service with Google Apps Script

You know what’s cooler than calling APIs and services with Google Apps Script? Making your own API with Google Apps Script. After playing around with my first REST API in Python and Bottle, I wondered how this could be done even more cheaply and quickly, without worrying about the complexities of databases and hosting.

When it’s simple and easy you are looking for, the answer is usually Google Apps Script. REST APIs and web services can be written in Google Apps Script in a matter of a few lines and then easily deployed to the public as a web app. This post shows how to make a RESTful API for interacting with a Google Spreadsheet.

Planning Your REST API

First, we’ll have to have an application. In this case, we are going to use a Google Spreadsheet with a product list as our application. We are going to allow clients to retrieve data from the spreadsheet.

Google Spreadsheet

Then we’ll have to decide how people can access and interact with the data from our application, based on a URI schema. There are some limitations to request URIs with Google Apps Script. Namely, you cannot use URI extensions like “/products/get/” or “/products/list/”. Instead, everything must be done with query string parameters. In this case, our REST service has only one method, so we will use the following schema to get information about a product:
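The schema (elided above) would look something like this, using the standard Apps Script web app URL pattern with a hypothetical prodid parameter:

```
GET https://script.google.com/macros/s/{SCRIPT_ID}/exec?prodid=1001
```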

For great insight into forming your URI schema, see this StackOverflow question.

The API could also have methods to add or delete products and update quantities but for simplicity, I will let you think about what that looks like.

Finally, we will have to decide how data is structured when it is retrieved from the API. In this case I will be using JSON, but it could be XML or plain text if you wanted. The Google Apps Script ContentService class has methods for formatting the data output in any way you choose.

For a much more in-depth look at designing an API, I would recommend reading these best practices.

Making the RESTful API or Service

doGet(), doPost() and URL Parameters

The most important things to understand are the Google Apps Script doGet() and doPost() methods. These methods take, as an argument, the request and decide what your application does with it. For simplicity, version 1 of our API only accepts GET requests, so there is no need for the doPost() method. But you should know that the doPost() method does have some additional functionality.

When doGet() receives the request, Google Apps Script turns it into a request object that, to your application, looks like this:

Request URL:
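For example (with a placeholder script ID; prodid is our own parameter):

```
https://script.google.com/macros/s/{SCRIPT_ID}/exec?prodid=1001
```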

doGet request Object:
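A sketch of the event object, with field names as documented for Apps Script web apps (the prodid values follow the hypothetical request above):

```json
{
  "queryString": "prodid=1001",
  "parameter": { "prodid": "1001" },
  "parameters": { "prodid": ["1001"] },
  "contextPath": "",
  "contentLength": -1
}
```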

For more information about the request object, see https://developers.google.com/apps-script/guides/web#url_parameters.

*It’s important to note two things. First is the difference between parameter and parameters: the parameters object returns a list of all prodid queries, while the parameter object only shows the first one in the URI. Second, you can use the query string to change your request, but you cannot, unfortunately, use variations of the URL extension.

Application Logic and Plumbing

Now that the request URI is neatly formatted into a JavaScript object, all we have to do is translate it into a query for our spreadsheet so that we can return a JSON-formatted response to the client. If this were a service, it may be that all you have to do is run the request object through some logic or an algorithm and return the result. In this case, I used two helper functions to interact with the data in a Google Spreadsheet.

The first function, productQuery, takes a product ID as input and returns the row corresponding to that product ID. The second, formatProduct, takes data from a spreadsheet row and turns it into an object with the spreadsheet headings as object keys.
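Since the original listing did not survive, here is a hedged reconstruction of the two helpers. productQuery is written against a plain 2D array; in Apps Script, that array would come from SpreadsheetApp.getActiveSheet().getDataRange().getValues(). The sheet layout (product ID in the first column, headers in the first row) is assumed:

```javascript
// Find the data row whose first column matches the product ID.
// "data" is a 2D array: row 0 is headers, column 0 is the product ID.
function productQuery(data, prodId) {
  for (var i = 1; i < data.length; i++) {
    if (String(data[i][0]) === String(prodId)) return data[i];
  }
  return null;
}

// Zip a data row with the header row into a {heading: value} object.
function formatProduct(headers, row) {
  var product = {};
  for (var i = 0; i < headers.length; i++) {
    product[headers[i]] = row[i];
  }
  return product;
}

var data = [
  ['prodid', 'name', 'quantity'],
  ['1001', 'Widget', 42]
];
var product = formatProduct(data[0], productQuery(data, '1001'));
console.log(product.name); // "Widget"
```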

Returning JSON Data

Now that all the intermediate logic is taken care of, all we have to do is write the doGet method so that it takes the request and returns our JSON-formatted product data.
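A self-contained sketch of that doGet: the spreadsheet read is inlined as a 2D array here (in the real script it comes from SpreadsheetApp), and the ContentService lines are the standard Apps Script way to return JSON from a web app:

```javascript
// Build the JSON payload for a product request from tabular data.
function productJson(data, prodId) {
  var row = null;
  for (var i = 1; i < data.length; i++) {
    if (String(data[i][0]) === String(prodId)) { row = data[i]; break; }
  }
  if (row === null) return JSON.stringify({ error: 'product not found' });
  var product = {};
  for (var j = 0; j < data[0].length; j++) product[data[0][j]] = row[j];
  return JSON.stringify(product);
}

// Apps Script entry point: e.parameter holds the query string values.
function doGet(e) {
  var data = [['prodid', 'name', 'quantity'], ['1001', 'Widget', 42]];
  return ContentService.createTextOutput(productJson(data, e.parameter.prodid))
    .setMimeType(ContentService.MimeType.JSON);
}
```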

Now that everything is connected, a request to the web app URL returns our JSON-formatted product data.


The next step could be to add functionality that allows the client to add a product as a row in the spreadsheet using a different “action” parameter. I’ll leave that for you to explore.

Testing and Debugging

Writing a REST service is a bit different from writing a normal script in that the debugging feedback loop is a bit longer. Any time you want to change your code and see how it is really working, you will have to:

  1. Save the script
  2. Create a new version of the App
  3. Deploy the App

To deal with this, I found it was a lot quicker to create a test wrapper function that calls the doGet function with a fake request object (shown above). This way you can use the Logger for quick inspection of your code.
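The fake request object for such a wrapper only needs the fields your doGet actually reads; a sketch (in Apps Script you would log the result with Logger.log):

```javascript
// A fake doGet event object, shaped like the real one.
var fakeRequest = {
  queryString: 'prodid=1001',
  parameter: { prodid: '1001' },
  parameters: { prodid: ['1001'] }
};

// In Apps Script, the wrapper would be:
// function testDoGet() {
//   Logger.log(doGet(fakeRequest).getContent());
// }
console.log(fakeRequest.parameter.prodid); // "1001"
```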

When you’re ready to test your code in action, try out Postman, an HTTP client that is great for building queries and viewing the response.

Deploying your REST Service

The most important thing to remember in publishing your API is that the access permission must be set to “Everyone (even anonymous)”. Otherwise, your client will not be able to access your service unless it is a logged-in Google user. Beyond that, all you have to do is follow the three steps from Testing and Debugging. For more, see the Google Apps Script documentation.

*Important note: because requests to Google Apps Script web apps return data from a 302 redirect URL, any client accessing the service must be capable of following redirects.

That aside, Google Apps Script provides a quick, cheap, and easy way to create simple REST APIs and services. View the full script on GitHub or make your own! Planning it out will give you a great idea of how to interact with REST services as a client.

Learn Programming and Databases for Digital Marketing | $10k Tech Skills 2/4

This is part two in the $10k Technical Skills for Digital Marketing series. Part one introduced the importance of learning client-side technologies and offered a plan to learn JavaScript, HTML, and CSS for digital marketing. This post broadens the picture by introducing server-side programming and databases, which together compose web applications. Understanding how web applications work is a major benefit and should be essential knowledge for digital marketing. Enjoy!

Learning How Web Applications Work

From Google Bot to the Facebook Social Graph to this WordPress blog, the web as we know it is a massive system of interconnected applications. All these applications are simply programs and databases that run on servers. And while building these applications is a massive undertaking, learning the underlying processes and concepts is not. It takes nothing more than a bit of effort and time to learn enough about programming and databases to significantly set yourself and your resume apart from the average digital marketer.

While the benefits of learning how to write server-side code and interact with databases are not as immediately useful as many of the skills listed in Part 1, it is actually the process of learning this skill that presents the real value. The learning process will provide an intuition about how applications work and how processes can be scaled. This is key to digital marketing at scale.

If you can understand how search engine bots crawl websites, you can understand what makes a website crawl-friendly, and you begin to understand the technical aspects of SEO. If you understand how algorithms work, you can understand EdgeRank and how Facebook decides to distribute content, and broaden your reach. If you can understand how your CMS works, you can map your analytics platform to it and gain better insight, which you can then use to automate processes like email and offer personalized experiences. This new intuition about the web will continue to present opportunities.

You will also find many practical opportunities to employ your new programming and database-querying skills for digital marketing tasks and processes. While these skills start to bleed into the realms of web development and data science/business intelligence, there are still many applications of server-side scripting languages, from automation to optimization, that can be very powerful for digital marketers.

Programming for the Web

When starting out on the road to learning server-side scripting, it is most realistic to start with PHP, Python, or Ruby on Rails. All three are open source, have strong communities, and offer plenty of free learning resources. They share many advantages, but each is powerful (and practical) in its own way.

programming languages for digital marketing
You see why I chose python…

PHP, for better or worse, has been the de facto server-side language of the web for a long time. PHP powers WordPress, Magento, MODX, and many other content management systems (CMSs), and if you are in digital marketing for long, you will likely run into at least one CMS powered by PHP. Learning PHP will come in handy when you find yourself wanting to add schema markup for search engines or scripts for testing or analytics platforms like Optimizely or Google Tag Manager.

Depending on the site(s) and development resources (or lack thereof) you are planning to work with, PHP may be a good choice. It is the easiest code to deploy, as all popular web servers support PHP.

Python is also used to build websites, with frameworks like Django and Flask, but more often sites built with Python are apps with a specific, custom purpose. Unlike PHP and Ruby, which are designed for web development, Python is a general-purpose language, which makes it a go-to language for data science. (The resources featured here are mostly about how to learn Python, as that is the language I have focused on learning the most. It has been great!)

For the technical marketer, Python is useful for scaling big(ger) data-science-y processes like web scraping, querying APIs, interactive analysis, and reporting. Many processes that are carried out manually can be programmed in Python and run on a cron job or other trigger. One major benefit of Python is that it is easy to learn, thanks to the number of educational resources and its friendly syntax. If you find yourself venturing into the world of data science, you will be well prepared with Python, as a large and active data science community supports it.

Ruby on Rails? Well, I really haven’t played with it much, but I hear it’s very nice. The key, I hear, is that it is good for rapid web app development.

Node and JavaScript were much of the focus of Part 1.

Database Querying and Analysis

Digital marketing without data is not digital marketing, and the digital marketer who is not data-literate is just a marketer. I am not arguing that all digital marketers should become SQL ninjas, but learning this skill, like programming, is as much about gaining an intuition about how systems and applications work as it is about developing a practical skill.

databases and analytics

For a real-world use case that employs this skill as both intuition and a practical skill, look no further than Google Analytics. The Google Analytics web interface is ‘simply’ an elegant way to query, sort, filter and visualize site usage/performance data that is collected in a database. Having a general understanding of how Google Analytics stores data and how different data points/hit types interrelate allows you to be much more precise in your analysis and confident that the data that you pull from Google Analytics is accurate.

SQL knowledge can also help when you need to pull raw data out of Google Analytics for further analysis or to avoid sampling. With Google Spreadsheets’ QUERY function, you can query spreadsheet data using SQL-like syntax (SQL is Structured Query Language). For quick analysis and more complex inspection of data sets, writing SQL queries to explore and shape data to your needs can be much quicker and easier to debug than writing a successive set of spreadsheet functions.
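For example, assuming session data in columns A:C with a header row, a QUERY call looks like this (the column roles and range are made up for illustration):

```
=QUERY(A1:C100, "select A, sum(C) where B = 'organic' group by A", 1)
```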

When dealing with large amounts of Google Analytics data, where sampling becomes a significant issue, Google’s BigQuery can be hooked up to Google Analytics to provide SQL-like query functionality with greater speed and scale. Once you become comfortable with this GUI-less interface, querying any database becomes much less daunting. You can then answer questions by directly querying databases, such as a website’s MySQL database via phpMyAdmin.

“Every question can be distilled into a database query,” Adam Ware of SwellPath told me when I first started learning about databases. The phrase seemed very exciting and has since proven accurate. I have come to realize that databases simply hold all the raw information in a defined structure. By asking the right question in the right way, your digital marketing insights are limited only by your data.

Once you start to understand how databases operate, you will notice them in apps across the web, from ecommerce stores to analytics platforms to blogs. Understanding how data is stored and how to extract the data you want will also significantly improve your ability to use applications to their full potential, ideate optimizations for existing apps, and learn new applications. This intuition is a skill that helps turn data into knowledge and, as you know, knowing is half the battle.

How to Learn Web Application Programming

Start Here: Codecademy.com

This is a great place to start with any web programming language. It is the quickest, easiest, and most fun way I have found to get up to speed with a programming language. Best of all, it is free. It offers courses in PHP, Python, and Ruby, and hosts very helpful Q&A forums for coders who are just starting out.

Get up to Speed: Intro to Programming with Python (Udacity)

Once you have gotten a feel for programming (and a few bumps and bruises to go along with it), the next step is to start to understand the real power that programming offers. Udacity’s Intro to Programming in Python picks up where Codecademy leaves off and introduces capabilities rather than just syntax and style.

For the digital marketer, this course is especially useful because it is taught through constructing a very rudimentary search engine crawler (or at least the general idea of one). This application opens a window into understanding how big applications work and will make you think differently about how search engines operate.

How the Web Works: Web Development (Udacity)

There is a lot more than just programming that differentiates marketers who can program from web developers. From hosting to caching to cookies, this course does a good job of introducing these concepts.

From my experience, it was a bit too difficult, as a follow-up to the Intro to Programming in Python course, to actually create and deploy a web app, but it does give a substantial understanding of technical web terminology to communicate effectively with web developers. (This is a very valuable skill, if you ask me.) From this course you will understand what topics you need to take on in detail to accomplish what you need to do as a technical marketer.

How to Learn Data Analysis with Databases

Become Data-Driven: Intro to Data Science (U. Washington & Coursera)

In my opinion (and I am a bit of a biased data geek), this is the best online course I have taken. Each lesson offered “aha!” moment after “aha!” moment while teaching really useful skills.

The course assumes only a bit of Python experience and offers a comprehensive introduction to everything from interacting with APIs in Python, to querying databases from the command line, to thinking and communicating with data. Taking this course will make any digital marketer more data-driven and back them up with the skills to take action.

Database Deep Dive: Introduction to Databases (Stanford & Coursera)

Slightly more academic than Intro to Data Science, this course provides a very strong foundation for understanding data and databases. If you are a “why does this work” type of person, this course will be very interesting.

From a practical standpoint, the course offers very good lessons on the JSON and XML formats, which are everywhere in digital marketing and essential for working with APIs. The database portion of the course will take you at least as far as you need to go for the digital marketing applications of databases.

Put it all Together: MongoDB University

If all these courses have been interesting to you and you have a good handle on programming, then this is the course for you! You will build a real web app from the ground up while learning MongoDB hotness. Another digital-marketing-specific benefit of this course is that the app you build is a blog. Understanding how blog content is retrieved and presented will help you understand a lot about semantic SEO.

I hope you have at least one direction you are excited about. Leave a comment if you have any questions, or follow the rest of the series by signing up for email notifications when new posts are up. APIs, web scraping, and “how to learn” are still to come!

How to Opt-Out of Optimizely (cdn.optimizely.com) in One Click

What does "waiting for 'cdn.optimizely.com'" mean?

In short, CDN stands for Content Delivery Network, and Optimizely is a service that provides A/B testing for websites. Optimizely makes A/B testing possible by swapping images or changing the HTML or CSS styling of a web page. To do this, it has to load additional information onto the webpage being tested.

The reason you are seeing "waiting for cdn.optimizely.com" as you are trying to load a site is most likely that the network you are using is somewhat slow. It just happens to be Optimizely's information (likely an image or other page content for the A/B test) that is coming across while you wait on that slow connection.

How to Opt-Out of Optimizely in One Click

The quick solution is to create a bookmarklet that opts out of Optimizely on any page where you don't want it to load. To do this:

  • Create a new bookmark in your browser
  • Instead of adding in a URL, add in the following:

  • Click the bookmarklet when loading sites that are using Optimizely.
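The javascript snippet itself is missing from this copy of the post. A minimal sketch of what such a bookmarklet could look like, assuming Optimizely's documented `optimizely_opt_out=true` query parameter (see optimizely.com for the authoritative version):

```javascript
javascript:(function(){var u=window.location.href;window.location=u+(u.indexOf('?')>-1?'&':'?')+'optimizely_opt_out=true';})();
```

Pasting that single line into the bookmark's URL field is what turns it into a clickable bookmarklet.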

Done! You’ve successfully opted out of Optimizely. You should no longer have to wait on cdn.optimizely.com. This should remain in effect until you clear that site’s cookies.

Optimizely is not Bad!

(In fact, it's pretty awesome)

I wrote this because a friend tweeted their frustration about waiting on "cdn.optimizely.com." The unfortunate part is that while Optimizely appears to be the culprit behind a slow-loading webpage, it's actually far more likely that a slow network connection is to blame.

You probably like Optimizely, you just didn't know it. Or until now, you didn't even know about it. Optimizely is used for A/B testing on tons of sites that you visit, from cnn.com to ehow.com. A/B testing is done to improve sites and provide a better user experience for people like you. And 99.999% of the time you don't notice it because it is making your life better. Only in this rare occasion is it bothering you. (And it's because of your network.)

How The Bookmarklet Works

The bookmarklet issues some instructions, written in javascript, to the browser.

This basically says the following:

“Here comes a javascript function”

"Take the current URL (aka window.location) and append to it the parameter that opts out of Optimizely." This is explained further on optimizely.com.

"Do these instructions!" When the URL changes, the browser reloads the page with the opt-out parameter. This tells Optimizely that no tests should be run on the page and that it should not load any additional images or information onto the page. (And you won't have to wait for it.)
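Put together, the bookmarklet's logic can be sketched as a small helper function (again assuming the `optimizely_opt_out=true` parameter; the `optOutUrl` name is chosen here just for illustration):

```javascript
// Build the URL the bookmarklet navigates to.
// Assumption: Optimizely's opt-out query parameter is optimizely_opt_out=true.
function optOutUrl(currentUrl) {
  // Use '&' if the URL already has a query string, '?' otherwise.
  var sep = currentUrl.indexOf('?') > -1 ? '&' : '?';
  return currentUrl + sep + 'optimizely_opt_out=true';
}

// In the bookmarklet itself, window.location is both the input and the target:
// window.location = optOutUrl(window.location.href);
```

Because assigning to `window.location` triggers a navigation, the single assignment both rewrites the URL and reloads the page with the opt-out parameter attached.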

I hope this works out for you. Just remember: you don't hate Optimizely, you hate slow internet connections!