SEO with the Google Search Console API and Python

The thing I enjoy most about SEO is thinking at scale. Postmates is fun because sometimes it's more appropriate to size opportunities on a logarithmic scale than a linear one.

But there is a challenge that comes along with that: opportunities scale logarithmically, but I don’t really scale… at all. That’s where scripting comes in.

SQL, Bash, JavaScript, and Python regularly come in handy for identifying opportunities and solving problems. This example demonstrates how scripting can be used in digital marketing to handle the challenge of having a lot of potentially useful data.

Visualize your Google Search Console data for free with Keyword Clarity. Import your keywords with one click and find patterns with interactive visualizations.

Scaling SEO with the Google Search Console API

Most, if not all, big ecommerce and marketplace sites are backed by databases. And the bigger these sites are, the more likely they are to have multiple stakeholders managing and altering data in the database. From website users to customer support to engineers, there are several ways that database records can change. As a result, the site's content grows, changes, and sometimes disappears.

It’s very important to know when these changes occur and what effect the changes will have on search engine crawling, indexing and results. Log files can come in handy but the Google Search Console is a pretty reliable source of truth for what Google sees and acknowledges on your site.

Getting Started

This guide will help you start working with the Google Search Console API, specifically the Crawl Errors report, but the script could easily be modified to query Google Search performance data or interact with sitemaps in GSC.

Want to learn about how APIs work? See: What is an API?

To get started, clone the GitHub repository: https://github.com/trevorfox/google-search-console-api and follow the "Getting Started" steps on the README page. If you are unfamiliar with GitHub, don't worry. This is an easy project to get you started.

Make sure you have everything from the README's "Getting Started" steps in place before continuing.

Now for the fun stuff!

Connecting to the API

This script uses a slightly different method to connect to the API: instead of using the Client ID and Client Secret directly in the code, the Google API auth flow reads them from the client_secret.json file. This way you don't have to modify the webmaster.py file at all, as long as the client_secret.json file is in the /config folder.

import pickle
from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow

# Full Webmasters scope (needed for markAsFixed); use webmasters.readonly for reporting only
OAUTH_SCOPE = ['https://www.googleapis.com/auth/webmasters']

try:
    # Reuse pickled credentials from an earlier run if they exist
    credentials = pickle.load(open("config/credentials.pickle", "rb"))
except (OSError, IOError) as e:
    flow = InstalledAppFlow.from_client_secrets_file('client_secret.json', scopes=OAUTH_SCOPE)
    credentials = flow.run_console()
    pickle.dump(credentials, open("config/credentials.pickle", "wb"))

webmasters_service = build('webmasters', 'v3', credentials=credentials)

For convenience, the script saves the credentials to the project folder as a pickle file. Storing the credentials this way means you only have to go through the Web authorization flow the first time you run the script. After that, the script will use the stored and “pickled” credentials.

Querying Google Search Console with Python

The auth flow builds the "webmasters_service" object, which allows you to make authenticated API calls to the Google Search Console API. This is where Google's documentation kinda sucks… I'm glad you came here.

The script's webmasters_service object has several methods. Each one relates to one of the five ways you can query the API, and the method names indicate how you would like to interact with or query the API.

The script currently uses the "webmasters_service.urlcrawlerrorssamples().list()" method to find how many crawled URLs had a given type of error.

gsc_data = webmasters_service.urlcrawlerrorssamples().list(siteUrl=SITE_URL, category=ERROR_CATEGORY, platform='web').execute()

It can then optionally call "webmasters_service.urlcrawlerrorssamples().markAsFixed(…)" to note that the URL error has been acknowledged, removing it from the webmaster reports.
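As a rough sketch (the loop below is illustrative rather than copied from the repo, and it assumes the list response's urlCrawlErrorsSample field), working through the samples and acknowledging them looks something like this:

# List sample URLs for one error category, e.g. 'notFound'
gsc_data = webmasters_service.urlcrawlerrorssamples().list(
    siteUrl=SITE_URL, category=ERROR_CATEGORY, platform='web').execute()

for sample in gsc_data.get('urlCrawlErrorsSample', []):
    print(sample['pageUrl'], sample.get('responseCode'))
    # Optionally acknowledge the error so it disappears from the report
    webmasters_service.urlcrawlerrorssamples().markAsFixed(
        siteUrl=SITE_URL, url=sample['pageUrl'],
        category=ERROR_CATEGORY, platform='web').execute()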

Google Search Console API Methods

There are five ways to interact with the Google Search Console API. Each is listed below as "webmasters_service" because that is the variable name of the object in the script.

webmasters_service.urlcrawlerrorssamples()

This allows you to get details for a single URL and list details for several URLs. You can also programmatically mark URLs as fixed with the markAsFixed method. *Note that marking something as fixed only changes the data in Google Search Console. It does not tell Googlebot anything or change crawl behavior.

The resources are represented as follows. As you might imagine, this will help you find the source of broken links and get an understanding of how frequently your site is crawled.

{
  "pageUrl": "some/page-path",
  "urlDetails": {
    "linkedFromUrls": ["https://example.com/some/other-page"],
    "containingSitemaps": ["https://example.com/sitemap.xml"]
  },
  "last_crawled": "2018-03-13T02:19:02.000Z",
  "first_detected": "2018-03-09T11:15:15.000Z",
  "responseCode": 404
}

webmasters_service.urlcrawlerrorscounts()

Querying this resource returns the day-by-day error counts needed to recreate the chart in the URL Errors report.
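A minimal query looks something like this. The response field names in the loop (countPerTypes, entries, timestamp, count) are my reading of the counts response, so treat them as assumptions; latestCountsOnly=False asks for the full time series rather than just the most recent counts:

# Day-by-day crawl error counts, broken down by category and platform
counts = webmasters_service.urlcrawlerrorscounts().query(
    siteUrl=SITE_URL, platform='web', latestCountsOnly=False).execute()

for count_per_type in counts.get('countPerTypes', []):
    print(count_per_type['category'], count_per_type['platform'])
    for entry in count_per_type.get('entries', []):
        print(entry['timestamp'], entry['count'])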

Crawl Errors
webmasters_service.searchanalytics()

This is probably what you are most excited about. This allows you to query your Search Console data with several filters and page through the response data to get way more data than you can get with a CSV export from Google Search Console. Come to think of it, I should have used this for the demo…

The response looks like this, with a "row" object for every record, depending on how you queried your data. In this case, only "device" was used to query the data, so there would be three "rows," each corresponding to one device.

{
  "rows": [
    {
      "keys": ["device"],
      "clicks": double,
      "impressions": double,
      "ctr": double,
      "position": double
    },
    ...
  ],
  "responseAggregationType": "auto"
}
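For reference, a query against searchanalytics() looks roughly like this. The dates, dimensions, and row limit below are placeholders; the request body is where filters and paging (rowLimit/startRow) go:

request = {
    'startDate': '2018-01-01',
    'endDate': '2018-03-31',
    'dimensions': ['device'],  # or 'query', 'page', 'country', 'date'
    'rowLimit': 5000,
    'startRow': 0              # bump by rowLimit to page through the full dataset
}

response = webmasters_service.searchanalytics().query(
    siteUrl=SITE_URL, body=request).execute()

for row in response.get('rows', []):
    print(row['keys'], row['clicks'], row['impressions'], row['ctr'], row['position'])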

webmasters_service.sites()

Get, list, add and delete sites from your Google Search Console account. This is perhaps really useful if you are a spammer creating hundreds or thousands of sites that you want to be able to monitor in Google Search Console.
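For example, listing every property tied to the authenticated account takes a couple of lines; the siteEntry and permissionLevel field names are from memory of the list response, so double-check them against what comes back:

# Every property the authenticated user can see, with their permission level
site_list = webmasters_service.sites().list().execute()
for site in site_list.get('siteEntry', []):
    print(site['siteUrl'], site['permissionLevel'])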

webmasters_service.sitemaps()

Get, list, submit, and delete sitemaps in Google Search Console. If you want fine-grained detail about how your sitemaps are being indexed, this is the way to add all of your segmented sitemaps. The response will look like this:

{
  "path": "https://example.com/sitemap.xml",
  "lastSubmitted": "2018-03-04T12:51:01.049Z",
  "isPending": false,
  "isSitemapsIndex": true,
  "lastDownloaded": "2018-03-20T13:17:28.643Z",
  "warnings": "1",
  "errors": "0",
  "contents": [
    {
      "type": "web",
      "submitted": "62",
      "indexed": "59"
    }
  ]
}
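Listing and submitting sitemaps follows the same pattern as the other methods; the segmented sitemap URL below is just a placeholder:

# List the sitemaps Google already knows about for the property
sitemaps = webmasters_service.sitemaps().list(siteUrl=SITE_URL).execute()
for sitemap in sitemaps.get('sitemap', []):
    print(sitemap['path'], sitemap.get('lastDownloaded'))

# Submit (or resubmit) a segmented sitemap
webmasters_service.sitemaps().submit(
    siteUrl=SITE_URL, feedpath='https://example.com/sitemap-posts.xml').execute()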

Modifying the Python Script

You might want to change the Search Console query or do something more with the response data. The query is in webmasters.py, and you can change the code to iterate through any query. The check method in checker.py is used to "operate" on every response resource, and it can do things that are a lot more interesting than printing response codes; one possibility is sketched below.
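For example, the check could re-request each sample URL to confirm the error still reproduces before anything gets marked as fixed. This is a hypothetical sketch, not the code that ships in checker.py; it assumes the requests library and a check(sample) function that receives one error-sample resource at a time:

import requests

def check(sample, site_url='https://example.com'):
    """Return True if the sampled URL now responds with a 200."""
    url = site_url.rstrip('/') + '/' + sample['pageUrl'].lstrip('/')
    try:
        status = requests.head(url, allow_redirects=True, timeout=10).status_code
    except requests.RequestException:
        return False
    return status == 200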

Query all the Things!

I hope this helps you move forward with your API usage, Python scripting, and Search Engine Optimization… optimization. Any questions? Leave a comment. And don't forget to tell your friends!

 


Google Analytics to Google Spreadsheets is Data to Insights

When you reach the limits of Google Analytics custom reports and you still need more, Google Spreadsheets and the Google Analytics Add-On can take you past sampling, data consistency and dimension challenges.

This post is all about the Google Analytics + Google Spreadsheets workflow. It is an end-to-end example of how you can extract more value out of your Google Analytics data when you work with it in its raw(ish) form in Google Spreadsheets.

Blog Post Traffic Growth and Decay with Age

The end goal is to look at how blog post traffic grows or decays with time and ultimately Forecast Organic Traffic to Blog Posts (coming soon).  But this post sets the foundation for that by showing how the data is extracted, cleaned and organized using a pivot table in order to get there.

There is also a bit of feature engineering to make the analysis possible. To do this, we will extract the date that each post was published from its URL and effectively turn "Date Posted" and "Post Age" into custom dimensions of each page. But enough setup. Let's start.

This post assumes a few minor things but hopefully will be easy to follow otherwise:

  • The Google Analytics Add-On for Google Spreadsheets already installed
  • Basic familiarity with Regular Expressions, a.k.a. RegEx (helpful but not necessary)
  • Basic familiarity with Pivot Tables in Google Spreadsheets
  • Blog URLs with dates as subdirectories, e.g. "/2015/08" (or the post date collected as a Custom Dimension)

Creating Your Custom Report

Once you have the Google Analytics Add-On up and running, this is actually pretty simple. It just takes a bit of trial and error to ensure that you've queried exactly what you want. From the Report Configuration tab, most of the time there are only a few, but very important, fields that you will need to worry about: Date Ranges, Dimensions, Metrics, and Filters.

Google Analytics Add-On Report Configuration

The purpose of this report is to look at sessions by blog post URL by month, over the entire history of the site.

I chose Last N Days because I know roughly the age of my site, and a date range that is too broad is OK. I chose sessions because this relates to how many people land on a given landing page in a given month of a given year. So that is all pretty straightforward.

Filters can get a bit tricky. A good practice for setting up filters is to first build them using the Google Analytics Query Explorer. That will enable you to rapidly test your queries before you are ready to enter them into your spreadsheet.

There are several filter operators that you can use but I kept this one fairly simple. It consists of three components (separated by semicolons):

ga:medium==organic; Only organic traffic (== means equals)
ga:landingPagePath=@20; Only landing page URLs that contain 20 (=@ means contains)
ga:landingPagePath!@? Only landing page URLs that do not contain a query string, because that's mostly garbage. (!@ means does not contain)
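Strung together (the semicolons act as ANDs), the full string entered in the Filters field looks like this:

ga:medium==organic;ga:landingPagePath=@20;ga:landingPagePath!@?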

I used ga:landingPagePath rather than page title because page titles can be less consistent than URLs: they are more likely to change and will sometimes show up as your 404 page title. Blog post URLs are a more consistent unique identifier for a post, but it is important to note that sometimes people change blog post URLs. We will deal with that later.

Cleaning the Data for Consistency

Even with good data collection practices, cleaning the data is extremely important for accurate analysis. In my case, I had changed a couple of blog post URLs over time and had to manually omit a few that my query's filter did not catch. Data cleansing becomes very important here for two reasons:

  1. Posts that are not combined under one unique, consistent URL will show as two separate posts in the pivot table, which will skew summary statistics.
  2. Rows that are not real post URLs, with a small number of sessions, will really skew summary statistics.

Consistency is key for Pivot Tables.

So in this case, for small exceptions, I just deleted the rows. For more common exceptions, I used the REGEXREPLACE function. This is a powerful tool for data cleansing and unique to Google Spreadsheets. It allows you to select the part of a string that matches a RegEx pattern and replace it with whatever you might want. In this case, I just searched for what I wanted to remove and replaced it with an empty string, e.g.

=REGEXREPLACE(A141,"(/blog)?([0-9/])*/","")

I used this to remove "/blog" from the URLs that were used before I transitioned from Squarespace to WordPress, and to remove the date numbers, because some had changed when a post was updated.

Extracting Blog Post Posting Dates

Extracting the date that the post was published is actually pretty simple. Again, I used another RegEx function, REGEXEXTRACT, to do it:

=SPLIT(REGEXEXTRACT(cell,"[0-9]{4}/[0-9]{1,2}"), "/")

The RegEx pattern finds any four digits, then a slash, then any one or two digits. The extracted string is then split into two cells at the slash. This yields the year in one column and the month in the next. I combined the month and year of the post date, and the month and year of the Google Analytics date, into Date objects so that I could use the DATEDIF function to calculate the age of the blog post as a dimension of the post. Maybe this is too much gory detail, but hopefully it's useful to somebody.
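As a rough illustration (the cell references are assumptions, not the original sheet's layout): if the extracted post year and month sit in C2 and D2, and the Google Analytics report month is a date in A2, the post age in months could be calculated like this:

=DATEDIF(DATE(C2, D2, 1), DATE(YEAR(A2), MONTH(A2), 1), "M")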

Finally, we end up with something that looks like this. This allows for pivoting the data about each post by each month of age.

Google Analytics Add-On Report Breakdown

Pivot Tables FTW!

Finally, all this work pays off. The result is one table of blog post URLs by Age and one table of URLs by summary statistics.

Google Analytics data pivot table

The blog post age table allows me to see if the posts show a natural growth or decay of traffic over time. The big purpose of the table is to create the chart below that basically shows that growth and decay don’t look that consistent. But this is an important finding which helps frame thinking about modeling blog traffic in 2016.

(Not shown here, the Google Spreadsheets SPARKLINE function is a great way to visualize Google Analytics Data.)

The summary statistic pivot table will be used for forecasting 2016 traffic. This just happens to be the topic of my next post, in which the tables are used to answer two questions: 1. What is the likelihood of reaching 25,000 sessions in 2016 if things stay the same? and 2. What needs to change in order to reach 25,000 sessions in 2016?

So, in summary, this post has proven a few things:

  1. If you work with Google Analytics and don't know RegEx, go learn! It's an incredibly useful tool.
  2. My blog posts do not demonstrate consistent growth or decay.
  3. I might just be as big of a nerd as my girlfriend thinks I am.

Hope it might be useful in thinking about similar problems or maybe even creating new ones!

Is Slack Messenger Right for My Team? Analytics and Answers


From AOL Instant Messenger to WeChat stickers, digital communication has always fascinated me. From the beginning, there has always been so much we don’t understand about digital communication. It’s kind of like GMO; we just started using it without considering the implications.

We are continually learning how to use the digital medium to achieve our communication goals. And meanwhile, our digital communication tools are ever evolving to better suit our needs. A prime example of this is the team messaging app, Slack.


Slack has adapted well and I would argue that it has dominated its ecosystem. There are a few reasons why I believe that it’s earned its position:

  1. It’s easy.
  2. It’s flexible.
  3. It’s not too flexible.

As a tool, Slack is malleable enough to form-fit your communication theories and practices, and it does little to dictate them. This means that its utility and its effect are less a factor of the tool and more a factor of our ability to shape its use.

So when the question was posed, “How well does Slack fit our needs as a team?” I have to admit I wasn’t sure. Days later, in my head, I answered the question with two more questions:

How well have we adapted the tool to us?

How well have we adapted to the tool?

The questions felt somewhat intangible but I had to start somewhere and me being me, I asked the data. I’ll admit I haven’t gotten to the heart of the questions… yet. But I did start to scratch the surface. So let’s step back from the philosophy for a minute, walk through the story, and start answering some questions.

So yeah, we tried Slack… Six months ago

A recently formed, fast-moving, and quickly growing team, we believed that we could determine our own ways of working. In the beginning, we set some ground rules about channel creation and, believe it or not, meme use (hence the #wtf channel). And that was about it. We promised ourselves that we would review the tool and its use. Then we went for it.

A while later, as I mentioned, a manager pointed out that we had never reviewed our team's use of Slack. It seemed fine, but the questions started to crop up in my head. Me being me, I had to ask the data.

This all happened about the time that I started to play with pandas. I didn't answer the questions, but I did get frustrated. Then I read Python for Data Analysis, pulled the data out of the Slack API (which only provides data about channels) and went a bit crazy with an IPython notebook.

To answer my theoretical questions, here are the first questions I had, a few that I didn't, and their answers.

How is Slack holding up over time?

Stacked Time Series

Don’t judge me. This was my first go with matplotlib.

This stacked time series shows the number of posts per channel (shown in arbitrary and unfortunately non-unique colors) per week. The top outline of the figure shows the total number of messages for each week. The strata represent different channels, and the height of each stratum represents the volume of messages during a given week.

It appears that there is a bit of a downward trend in the overall number of messages per week. A linear regression supports that: the regression line indicates a trend of about two fewer messages each week than the week before.

Linear Regression
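The actual notebook isn't posted here, but for a rough sense of the mechanics, a chart like this can be built with pandas and matplotlib from a message-level table. Everything below is a sketch under assumptions: the toy DataFrame, its channel/ts columns, and the weekly bucketing stand in for data pulled from the Slack API.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# Toy message-level frame: in practice, one row per message pulled from the Slack API
df = pd.DataFrame({
    'channel': ['#generalofficestuff', '#generalofficestuff', '#wtf', '#wtf'],
    'ts': [1451606400, 1452211200, 1452211300, 1453420800],
})

# Bucket messages into weeks and count posts per channel per week
df['week'] = pd.to_datetime(df['ts'], unit='s').dt.to_period('W').dt.start_time
weekly = df.groupby(['week', 'channel']).size().unstack(fill_value=0)

# Stacked time series: one stratum per channel, top outline = weekly total
plt.stackplot(weekly.index, weekly.T.values, labels=weekly.columns)
plt.ylabel('messages per week')

# Simple linear regression on the weekly totals to estimate the trend
totals = weekly.sum(axis=1)
slope, intercept = np.polyfit(np.arange(len(totals)), totals.values, 1)
print('trend: about %.1f messages per week' % slope)
plt.show()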

If you ask why there appears to be a downward trend in total use over time, I think there are a few ways to look at it. First, the stacked time series shows that high-volume weeks are generally a result of one or two channels having big weeks rather than a slowing of use overall. This makes sense if you consider how we use channels.

We have channels for general topics and channels for projects. And projects being projects, they all have a given timeframe and endpoint. This would explain the “flare ups” in different channels from time to time. It would also explain why those same channels come to an end.

One way to capture the difference between short-lived project channels and consistent topic channels is with a box plot. Box plots represent the distribution of total messages per week for each channel by showing the high and low weekly totals for a channel and describing the range (the interquartile range) that weekly message totals commonly fall into.

Slack Analytics Channels Box Plot

Each box plot represents a Slack channel. The Y axis scales to the number of messages in that channel.

For a specific example, the channel on the far left (the first channel created, named #generalofficestuff) has had a relatively high maximum number of messages in a week, a minimum around 1 or 2 (maybe a vacation week), and 50% of all weeks in the last six months fall between about 7 and 28 messages, with an average of 10 messages per week.

On the other hand, channels on the right side of the chart, more recently created and generally project-specific channels, describe the “flare ups” that can be seen in the stacked time series chart above. If you wanted to look deeper, you could make a histogram of the distribution of week totals per channel. But that is a different question and, for my purposes, well enough described with the box plot. 

So… how is Slack holding up over time?!

The simple answer is, use is declining. Simple linear regression shows this. The more detailed answer is, it depends. As the stacked time series and box plots suggest, in our case, use over time is better understood as a factor of the occurrence of projects that lend themselves especially well to Slack channels. I know what you’re saying, “I could have told you that without looking at any charts!” But at least this way nobody is arguing.

Projects… What about People?

Another way to look at this question is not by the "what," but by the "who." Projects, and their project channels, are basically composed of two components: a goal/topic and a group of people working toward that goal. So far we have only looked into the goal, but this leaves the question, "Are the people a bigger factor in the sustainability of a channel than the topic?"

I looked at this question many ways but finally, I think I found one visual that explains as much as one can. This heat map shows the volume of messages in each channel per person. It offers a view into why some channels might see more action than others and it also suggests how project/channel members, and the synergy between them, might affect a channel’s use.

Slack Analytics Hierarchical Clustering Heatmap

Volume of messages is represented by shade, with users (user_id) on the Y axis and channels on the X axis. Hierarchical clustering uses Euclidean distance to find similarities.
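Seaborn's clustermap gets you very close to this kind of visual out of the box, and Euclidean distance is its default metric. The frame below is a toy stand-in for the real message data, so treat it as a sketch rather than the notebook's code.

import pandas as pd
import seaborn as sns

# Toy message-level frame: one row per message with its user and channel
df = pd.DataFrame({
    'user': ['U01', 'U01', 'U02', 'U03', 'U03', 'U03'],
    'channel': ['#generalofficestuff', '#wtf', '#generalofficestuff',
                '#project-x', '#project-x', '#wtf'],
})

# Rows are users, columns are channels, values are message counts
user_channel = df.groupby(['user', 'channel']).size().unstack(fill_value=0)

# Cluster both axes; shading encodes message volume
sns.clustermap(user_channel, metric='euclidean', cmap='Blues')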

What I think is most interesting in this visualization is that it shows the associations between people based on the amount of involvement (posts) in a channel. The visual indicates that, perhaps, use is as much a factor of people as of the channel's project, topic, or time.

There are, of course, other factors. We cannot factor out the possibility of communication moving into direct messages or private groups. But again, that is another question and beyond the bounds of this investigation.

So what?

So we got a glimpse at the big picture and gained a pretty good understanding of the root cause of what motivated the question. This is my favorite part. We get to sit back, relax, and generate a few new hypotheses until we run into a new question that we can’t avoid.

What I think is coolest about the findings is that they suggest a few more hypotheses about what communication media our team's communication occasionally moves to and what media it competes with. Now these investigations start to broach the fundamental questions that we started with!

There are a few things at play here, and the following are just some guesses. It could be that email dominates some projects or project phases because we are interacting with outside partners (people) who, for whatever reason, cannot or will not use Slack. Sorry, Slack. It could also be that, due to the real world that we live in, communication is happening over chat apps like WeChat or WhatsApp instead.

In either case, we return to the idea of people adapting to tools that are adapting to people. The use of digital communication tools reflects the people who use them and each person’s use reflects the structure and offerings of the tool.

And what’s next?

Hopefully, if you read this, you have more questions about this reality, and I might (probably) go on to try to answer a few more. I think there are a few interesting ways to look at how people are norming with Slack.

Maybe you are interested in how all this pandas/matplotlib stuff works, because I am too. So I think it will be fun to post the IPython notebook and show how it all works.

Otherwise, it will be interesting to watch how this tool and this team continue to evolve.

Agile Strategy for Data Collection and Analytics

If you are like most people doing business online, it seems like there is always a long list of digital to-dos that are somewhere between "that will happen in Q4" and "that should have happened by Q4 last year." Aside from the constant stream of daily hiccups that arise due to the asynchronous nature of our medium, if you are like most others managing a website, you face broader development challenges: slow servers, uncooperative CMSs, or lame mobile experiences impacting your online success.

This is not a failure that you have to accept! Let me introduce you to a little thing that has been bouncing around in the software/web development community that will make your online business operations feel less like swimming in peanut butter. It's called Agile Development and it's sexy. It's fast and sexy like a cheetah wearing high heels.

We can apply these principles of Agile Development to data collection, analytics, and optimization to provide two exceptional benefits: rapid access to data and insight, and safeguards against constantly changing web properties.

For data collection, analytics, and optimization:

  • An Agile approach provides action before traditional methods provide insight
  • An Agile approach safeguards against the constant variability of the web medium

“If you fail to plan, you are planning to fail!” — Ben Franklin

Learning from Feature Driven Development

The Agile Development concept covers an array of development methodologies and practices, but I would like to drill into one especially coherent and efficient method of Agile called Feature-Driven Development.

Feature-Driven Development essentially works like this: an overall project is planned as a whole, then it is separated into discrete pieces, and each of these pieces is designed and developed as a component and added to the whole. This way, instead of having many semi-functional components, the project's most valuable components are complete and fully functioning.

Phased Implementation (Not Iteration)

Because you might have already heard something about Agile Development, it is important at this point to dispel the notion that Agile development is defined by iterating on top of existing products. In a narrow sense that is true, but mostly it is the complete opposite of the Agile approach. The only iterations that happen are the planning, implementation, and completion of a new feature. This is not the same as adding layers upon existing features (more on this with the Definition of Done). The difference here is planning and the ability to see the project and business objectives as a whole.

Step 1: Develop an Overall Model

You must plan! Planning in an organization can be hard to motivate and difficult to initiate, but these planning steps will actually provide you with better, more actionable data sooner.

Understand the system. This is digital. There are a lot of moving parts. It is very important to really know how your digital presence affects your physical business and your overall business strategy and vice versa. Additionally, there are likely many components within your business that are (or could be) affected by the data that can be collected. This leads to my next suggestions.

Ask questions and seek multiple perspectives. This is the time to confront your assumptions about your business, your pain points, and your data needs. It is important to really know the processes and decisions that are taking place and how they are (or are not) or could be affected by data. Communicating with those who interact with and make decisions on the data at any level will be extremely insightful.

Be strategic. Look at the big picture of the future, define your goal and work backwards. Agility does not come by luck but rather by being aware of and prepared for all foreseeable possibilities. Consider how things will change and what parts of your digital presence are shifting. How will redesigns, platform changes, and code freezes affect your strategy? This is generally the best way to face an analytics problem so this step applies very well to analytics. Agile was created to solve the problems of being short-sighted and reactive.

Step 2: Define the Parts of Your Plan

This is where the fun starts. There are multiple ways an analytics strategy can be divided and further subdivided into parts. When considering how to divide the project into parts, the goal should be to define parts at their most discrete, independent, or atomic level. This will be helpful in prioritizing the parts into steps. Ultimately, these parts can be regrouped based on similarity and development implementation process.

By Web Property and Section

An organization’s web presence is often not limited to a single site or app. There may be different properties or sections of web properties with different intents. Inevitably, some of these properties or sections will have a bigger impact on your organization’s goals and thus would be prioritized differently.

By Data Scope (User, Page/Screen, Event)

Each web property has layers of data that can be gathered from it. Data about the user in the app or website's database, information about the content of the page, and information about how the user interacts with the app or website can all be thought of discretely. These differ in terms of intelligence and the actual development work that is required to collect the data.

By Data Use

Another way to divide up the data-collection needs is by end use. For instance, you may be an ecommerce store that has different people or teams who are merchandising, planning and creating content, managing email, social campaigns, or paid media campaigns and/or optimizing the application and user experience. The data needs for each initiative will often overlap with other initiatives but sometimes data needs will be very different from others. These different data needs can be thought of as different parts of your strategy.

By Data Depth

Think 80/20 rule in terms of granularity. Some data is instantly useful. For instance, you may not be tracking clicks on your main calls-to-action or "Buy" buttons. These clicks are likely key micro-conversions, and having this interaction insight can literally change your strategy overnight. Another layer of depth would be knowing what product was added to the cart as part of that event. A further layer would be configuring Google Analytics' Enhanced Ecommerce to understand how users interact with products from the product page to the checkout. Each of these examples provides varying depths of data but also requires varying amounts of development time.

Other features like Google AdWords Dynamic Remarketing and Google Analytics Content Groupings can be thought of similarly, as they need more information to associate with the user or page.

Step 3: Prioritize

This is the most important step. This is where the unique value of the Agile approach really shines. This can drastically lower the cost and shorten the time to data-driven action. All the planning and foresight that took place before can be leveraged to make the right decisions for the most success.

Consider Goals

Duh. The whole reason you are gathering data is to be data-driven. The parts of your plan that most directly affect your top-line goals should be at the top of the list. Think about every time you have said or heard “If we only knew abc we could achieve xyz.” Now depending on the value of xyz, prioritize data collection.

Consider Time

This is what Agile is all about! With goal impact in mind, communicate with relevant parties and your development team or partners to understand how long it will take to implement the code to start gathering data. Sometimes the value of data will scale with the development time; other times it may be as simple as using a Google Tag Manager click listener on calls-to-action to send events to Google Analytics within a few minutes. Overall, it's good to have some data to orient your decisions right away, so go for the quick wins first and work with that as code is being implemented to get the real data gold.

Consider Cost

Unfortunately, bottom lines still exist, and often the development resource cost of implementing code to gather data will have to be justified. Some data collection might be cost-prohibitive, but it is possible that gathering data that is easier to collect, such as a standard Ecommerce implementation, will give you the justification to get more in-depth data down the road. Overall, get the valuable data that comes cheap and squeeze the life out of it until you need more depth.

Step 4: Implementation Cycle (Plan, Implement, QA)

Now, for the moment we've all been waiting for, let the collection begin! This is the step that most people think of when they think of Agile development: sprinting to complete a feature and then releasing it. For Agile analytics, this works the same way. Now that there is a prioritized list of analytics "features," or streams of data, each one should be planned, implemented, and tested successively.

Plan

This is a more detailed plan than the overall model. This plan defines how the data will be collected. For example, this is when Google Analytics Event or Custom Dimension naming conventions would be defined and documented. Be explicit. This will really improve the efficiency of the process.

Implement

Buy your development partner beer and pizza and pass your documentation on to them. Keep them happy and maintain a good relationship. There will be more implementations in the future. Hopefully, your documentation is clear but be open and responsive to questions; this is all about speed and accuracy.

Quality Assurance

This should happen in your development environment so that when the code is implemented on the site, the data that is reported is clear and accurate. Be thorough, as this implementation should stay this way well into the future. If changes are to be made, keep them discrete, just as in implementation.

These three steps can happen simultaneously across parts. For example, planning can happen on a future part while implementation and QA are happening on the present part.

Start Optimizing!

Agile is not simple, but it's also not magic. Speeding up the time to data-driven action is made possible by the planning that happens up front. Being proactive is not only a practice of Agile but also a general best practice in analytics. It is the planning that makes agile efficiency possible. It may seem difficult, but putting in the effort to plan will put you in a position to act proactively, and with agility, into the future. Happy optimizing!