The Story Behind the New WordPress.com

A little over a year and a half ago, we had a dramatic rethink of the technologies and development workflows for building with WordPress.

Our existing codebase and workflows had served us well, but ten years of legacy was beginning to seriously hinder us from building the modern, fast, and mobile-friendly experiences that our users expect. It seemed like collaboration between developers and designers was not firing on all cylinders. So we asked ourselves the question:

“What would WordPress.com look like if we were to start building it today?”

A New Beginning: Prototyping and Iterating

We’d asked ourselves this question before, and had our fair share of initiatives that didn’t result in useful change. Looking back, we were able to pinpoint our biggest mistakes: we’d been starting with a muddy vision, and were trying to solve an ill-defined problem. These insights really helped us change our approach.


One of the original Calypso prototype screens, listing all of your WordPress sites.

Calypso, the codename for this new WordPress admin interface project, started differently. To present a clear vision, we built an aspirational HTML/CSS design prototype — based on clearly defined product goals — that allowed us to imagine what a new WordPress.com could look like when complete. We knew it would change over time as we launched parts to our users, but the vision provided all of Automattic with something to aim for and get excited about.

Once the Calypso prototype was in a good place, the early days of development were all about making tough decisions such as which language to use, whether to use a framework, and how we would extend our API. Automattic had just acquired Cloudup, an API-powered file-sharing tool built with JavaScript. The Cloudup team showed us a solid, maintainable, and scalable path towards making WordPress.com completely JavaScript-based and API-powered.

Since WordPress is a PHP-powered application, our company-wide development skill-set has historically been PHP-heavy with a sprinkling of advanced JavaScript. This made Calypso intimidating to other engineers and designers at the company for much of the first six months of its development — we were building something that few people could jump in on.

Even the core Calypso project team had to get over its intimidation; none of us were strong JavaScript developers. But as each day passed our experience grew: we made mistakes, reviewed them, fixed them, and learned. Once we had the project moving, we set better examples for other engineers and shared our knowledge across the company.

One great change came out of building an early design prototype: improved collaboration using GitHub. Calypso prototyping was done collaboratively between a handful of designers in GitHub; although many of us had long used GitHub for personal projects it was relatively new for internal projects, which historically used Trac for most project management and bug tracking. Using GitHub helped us see how much easier internal collaboration could be, and how to allow for much greater feedback on individual work being done.


Peer code reviews show no sign of slowing down and are now widely accepted.

As GitHub had worked so well for the prototyping stage, we switched to it for all Calypso development, allowing us to harness the pull request (PR) system for peer code reviews and build our own custom GitHub-based workflow. Code reviews were new for many developers — traditionally at Automattic, we have had no systematic peer code review outside of the VIP team’s daily review of client sites. Code review, though it initially added to the intimidation of starting to work with Calypso, greatly increased the quality of our codebase and helped everyone level up their JavaScript skills.

What started as a team of seven people working on Calypso quickly spread to a cross-section of teams, with 10, then 14, then 20 Automatticians actively working in the Calypso codebase. Two months after the launch of the first Calypso-powered feature on WordPress.com, we had 40 contributors working on Calypso across five different teams. We iterated over the next year with the “release early, release often” Automattic mindset, launching 40 distinct Calypso-powered features on WordPress.com with over 100 individual contributors.

By the middle of 2015 the Calypso codebase was in good enough shape to be used outside of the web browser. Since Calypso is entirely JavaScript, HTML, and CSS, it can run locally on a device with a lightweight Node.js server setup. Using a technology called Electron, we built native desktop clients running the same code bundled inside the applications. We first built a native Mac desktop app, which is now available, and are continuing that work with soon-to-be-launched Windows and Linux apps. Seeing these apps come together and using them internally really started to justify all the hard work we’d put into the Calypso codebase.

Open Sourcing Calypso, the Power Behind WordPress.com

One of our Calypso developer hangouts in progress, and Team IO, who built the Calypso editor, at our all-company Grand Meetup in October.

Over the past year and a half, Calypso has gone from an idea to an aspirational prototype to a fully functioning product built, launched iteratively, and used by millions of users. Internally, it’s been a period of great change and growth. We’ve embraced cross-team collaboration through GitHub and peer code reviews through the PR review system, gone from just a couple of great JavaScript developers to a company full of them, and seen incredible collaboration between designers and developers on a daily basis.


A handy chart to show the differences between the old and new WordPress.com (pdf, img)

We’re proud to be able to open source all of the hard work we’ve put in, and to continue to build on the product in an open way. You can read more about opening up Calypso development on our CEO Matt Mullenweg’s site.

Over the next few months, we’ll publish more in-depth posts exploring the technical details and workflows behind Calypso: how we manage our own unique GitHub flows, how we’ve used other popular open source libraries like React and concepts like Flux, and our experiences bundling and launching native app clients. Keep an eye out for those by following this blog (in the bottom right), and in the meantime, check out the active Calypso codebase as we continue to iterate on it.

Andy Peatling
Calypso Project Lead

Data for nothing and bytes for free

WordPress.com is a freemium service, meaning that our awesome blogging platform is provided for free to everyone, and we make money by selling upgrades. We process thousands of user purchases each week, so you might expect that we know a lot about our customers. The truth is, we are still learning. In this post, we will give you some insights into how we try to understand the needs and behaviors of users who buy upgrades.

We know there are many kinds of users and sites on WordPress.com. To understand the needs of users who purchase upgrades, one would naturally analyze their content consumption and creation patterns. After all, those two things should tell us everything about our users, right?

Somewhat surprisingly, the median weekly number of posts or pages a user creates, and the median weekly number of likes and comments a user receives, is zero! And I’m not talking about dormant users; these are our paying customers. There are lots of reasons for this, like static sites that don’t need to change very often, or blogs that post less than weekly. But it doesn’t give us much data to work with. Well, let’s start with something that IS known about every user: their registration date.

Thousands of users register daily on WordPress.com. What does the day of the week on which a user registered with us say about their purchasing preferences? Is it possible that users who register during the week are more work-oriented, and users who register during weekends are more hobby-oriented? To test this question, we’ll look at purchases that were made in our online store between March and September 2013.

We’ll divide the purchasing users into two groups: those who registered between Monday and Friday (let’s call them “workweek users”) and those who registered on Saturday or Sunday (let’s call them “weekend users”).

Side note: To a first approximation, we use the registration GMT time to label a user as “registered on weekend” or “registered during the workweek”. We also ignore weekend differences that exist between different countries. These are non-trivial approximations that make the analysis simpler and do not invalidate the answer to our question.
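As a sketch of that labeling step (in Python, with a hypothetical helper name — not our production code), the grouping might look like this:

```python
from datetime import datetime, timezone

def registration_group(registered_at: datetime) -> str:
    """Label a user by the GMT day of week they registered on.

    Monday-Friday -> "workweek", Saturday/Sunday -> "weekend".
    """
    # weekday() returns 0 for Monday through 6 for Sunday
    return "weekend" if registered_at.weekday() >= 5 else "workweek"

# 2013-03-02 was a Saturday; 2013-03-04 was a Monday
print(registration_group(datetime(2013, 3, 2, tzinfo=timezone.utc)))  # weekend
print(registration_group(datetime(2013, 3, 4, tzinfo=timezone.utc)))  # workweek
```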

To examine the purchasing patterns of these groups, let’s calculate the fraction each product contributes to all purchases. For example, the most prevalent products in both categories were domain mapping and domain registration. These two products, which are usually bought together, account for about 35% of the upgrades bought by both our workweek and weekend users. Let us now continue this comparison using a graph:


What do we learn from this comparison? Almost nothing. Which is not surprising, as the purchasing distribution is mostly determined by factors such as user preferences, demand, and price.

Let’s look for more subtle differences. We’ll use a technique known as a Bland-Altman plot. Bland and Altman, two British statisticians, noted that plotting one value versus another implies that the one on the X axis is the cause and the one on the Y axis is the result. An alternative implication is that the X axis represents the “correct value”. Neither is true in our case. We are interested in understanding the agreement (disagreement, to be more precise) between two similar measurements, when neither is superior to the other. Thus, instead of plotting the two closely correlated metrics (purchase fractions in our case) against each other, we plot their average on the X axis and their difference on the Y axis. In this domain, higher X values designate more prevalent products, positive Y values designate a preference among workweek users, and negative Y values a preference among weekend users. This is what we get after transforming the fractions to the logarithmic domain:
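To make the transformation concrete, here is a small Python sketch; the product names and fractions are invented for illustration, not our real data:

```python
import math

# Hypothetical purchase fractions per product (made-up numbers):
# workweek users vs. weekend users.
workweek = {"domain": 0.35, "no_ads": 0.08, "private_reg": 0.05}
weekend  = {"domain": 0.35, "no_ads": 0.05, "private_reg": 0.09}

def bland_altman_log(a, b):
    """Return the (mean, difference) of two fractions in the log10 domain."""
    la, lb = math.log10(a), math.log10(b)
    # X axis: average -> product prevalence.
    # Y axis: difference -> positive favors workweek, negative favors weekend.
    return (la + lb) / 2, la - lb

for product in workweek:
    x, y = bland_altman_log(workweek[product], weekend[product])
    print(f"{product}: x={x:.3f}, y={y:+.3f}")
```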


Now things become interesting. Let us take a look at some of the individual points:


As I have already mentioned, domain mapping and registration are the most popular products. Not surprisingly, these products are equally liked by weekend and workweek users. Recall our initial intuition: users who register during weekends should be more hobby-oriented, and users who register during the week more job-oriented. We now have some data that supports this intuition. Of all the products, private registration, followed by space upgrades, has the strongest bias towards weekend users. Indeed, one would expect personal users to care about their privacy much more than corporate ones, and, being more cost-sensitive, personal users are more likely to purchase a space upgrade rather than one of our plans. The opposite side of the dividing line makes sense too: blocking ads is the cheapest option for differentiating a workplace site, followed by custom design. These two options are included in all our premium plans, but I can understand how a really small business would prefer buying individual options.

Another note: if you are worried about the statistical significance of this analysis, you are right to ask. I don’t show it here, but exactly the same picture emerges when we analyze data from different time periods.

So what?

As an app developer, you will at some point be frustrated by how little you know about your customers. Don’t give up! Start with the small things that you do know. Things such as day of the week, geographical location, and browser version may shed useful light, and you can build out a picture from there, adding to it bit by bit. Gathering such information is like gardening: it sounds like a lot of work, but you might be surprised at what you can get from a little investment of time. With determination (asking lots of questions), creativity (looking at a problem from new angles, starting with information you already have), and the right tools in your hands, you can learn something about your users and grow your garden of understanding.

Authentication improvements for testing your apps

We’ve just made it easier for developers to authenticate and test API calls with their own applications.

As the client owner, you can now authenticate with the password grant_type, allowing you to skip the authorization step and log in directly with your username and password. You can also request the global scope, so that you no longer need to request authorization for each blog you want to test your code against.

This is especially useful to contributors of the WordPress Android and iOS apps, which previously required special whitelisting on our part.

Here’s an example of how you can get started with using both these features:

Note that if you are using 2-step authentication (highly recommended) you will need to create an application password to be able to use the password grant_type.

$curl = curl_init( 'https://public-api.wordpress.com/oauth2/token' );
curl_setopt( $curl, CURLOPT_POST, true );
curl_setopt( $curl, CURLOPT_POSTFIELDS, array(
    'client_id' => $your_client_id,
    'client_secret' => $your_client_secret_key,
    'grant_type' => 'password',
    'username' => $your_wpcom_username,
    'password' => $your_wpcom_password,
) );
curl_setopt( $curl, CURLOPT_RETURNTRANSFER, 1 );
$auth = curl_exec( $curl );
$auth = json_decode( $auth );
$access_key = $auth->access_token;
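If you prefer to experiment from another language, here is a hedged Python sketch of the same flow. The helper names are ours, and the request handling is shown offline (building the form body and parsing a token response) rather than hitting the live endpoint:

```python
import json
from urllib.parse import urlencode

def build_token_request(client_id, client_secret, username, password):
    """Build the form-encoded body for a password grant_type token request."""
    return urlencode({
        "client_id": client_id,
        "client_secret": client_secret,
        "grant_type": "password",
        "username": username,
        "password": password,
    })

def extract_access_token(response_body: str) -> str:
    """Pull the access token out of a JSON token response."""
    return json.loads(response_body)["access_token"]

body = build_token_request("123", "secret", "example", "hunter2")
print("grant_type=password" in body)  # True

# A token response shaped like the one the PHP snippet decodes:
print(extract_access_token('{"access_token": "abc123", "token_type": "bearer"}'))  # abc123
```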

As noted above, these are only available to you as the owner of the application, and not to any other user. This is meant for testing purposes only.

You can review existing authentication methods here.

If you have any questions, please drop them in the comments or use our contact form to reach us.

A brand new Developer Site

As you may have noticed, we’ve just relaunched the Developer site (the very one you’re reading right now!) with a brand new look and feel!

We’ve rebranded the site to match the overall WordPress.com aesthetic, as well as to align with the new user management and insights sections we launched just a few weeks ago.


The goal of the redesign was not only to modernize the site, but also to make it easier for you, our partners and third-party developers, to find the information you’re looking for. In addition, we’ve reviewed all of our existing documentation and past blog posts to make sure the information is accurate and relevant.

Over the next few months, you’ll see more updates to the site and more frequent blog posts from our team.

I’d personally like to thank the team that worked on the relaunch with me: Raanan, Kelly, Kat, Justin, and Stephane.

If you’d like to let us know what you think of the new site, report a bug, or have suggestions for future improvements, please comment below, tweet at us @AutomatticEng or contact us privately.

An efficient alternative to paging with SQL OFFSETs


Running WordPress.com means having multimillion-record database tables that we often need to query in batches.

Since we can hardly select (or update, etc.) millions of records at once and expect speed, we commonly have to “page” our scripts to handle a limited number of records at a time, then move on to the next batch.

Classic, but inefficient, solution

The usual way of paging result sets in most SQL RDBMSs is to use the OFFSET option (or LIMIT [offset], [limit], which is the same).

SELECT * FROM my_table LIMIT 100 OFFSET 8000000;

But on a performance level, this means you’re asking your DB engine to figure out where to start all on its own, every time. It must scan past every record before the queried offset, because the preceding rows could differ between queries (deletes, etc.). So the higher your offset, the longer the overall query will take.

Alternative solution

Instead of keeping track of an offset in your query script, consider keeping track of the last record’s primary key from the previous result set: say, its ID. On the next loop iteration, query your table for records with a greater value for that ID.

SELECT * FROM my_table WHERE id > 7999999 LIMIT 100;

This will let you page in the same way, but your DB’s engine will know exactly where to start, based on an efficient indexed key, and won’t have to consider any of the records prior to your range. Which will all translate to speedy queries.

Here’s a real-life sample of how much difference this can make:

mysql> SELECT * FROM feeds LIMIT 8000000, 10;
10 rows in set (12.80 sec)

mysql> SELECT * FROM feeds WHERE feed_id > 12958559 LIMIT 10;
10 rows in set (0.01 sec)

I received the very same records back, but the first query took 12.80 seconds, while the alternative took 0.01 instead. :)
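You can reproduce the effect on a toy table. This Python/SQLite sketch (SQLite rather than our MySQL setup, but the principle is the same) checks that both paging styles return identical rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE feeds (feed_id INTEGER PRIMARY KEY, url TEXT)")
conn.executemany(
    "INSERT INTO feeds (feed_id, url) VALUES (?, ?)",
    [(i, f"https://example.com/feed/{i}") for i in range(1, 1001)],
)

# Offset paging: the engine must step over every skipped row.
offset_page = conn.execute(
    "SELECT feed_id FROM feeds ORDER BY feed_id LIMIT 10 OFFSET 500"
).fetchall()

# Keyset paging: jump straight to the range via the primary-key index.
keyset_page = conn.execute(
    "SELECT feed_id FROM feeds WHERE feed_id > 500 ORDER BY feed_id LIMIT 10"
).fetchall()

print(offset_page == keyset_page)  # True: same rows, cheaper query plan
```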

PHP/WordPress example

// Start with 0
$last_id = 0;

do {
    $blogs = $wpdb->get_results( $wpdb->prepare(
        'SELECT * FROM wp_blogs WHERE blog_id > %d LIMIT 100;',
        $last_id // Use the last ID to start after
    ) );

    foreach ( $blogs as $blog ) {
        // Do your thing!
        // ...
        // Record the last ID for the next loop
        $last_id = $blog->blog_id;
    }
// Do it until we have no more records
} while ( ! empty( $blogs ) );

Like Elasticsearch? We do too!

Elasticsearch tools

Elasticsearch, if you’re not familiar with it, is a distributed, RESTful search and analytics engine.

When it comes to implementing such an infrastructure, our developers not only face the challenges involved in indexing tens of millions of sites with grace and skill, they also write quite extensively about their related adventures, so others can benefit from their experiences.

You can find a plethora of posts on Greg Brown’s blog, under the appropriate tag, on subjects ranging from performance and scaling all the way to “Elasticsearch, Open Source, and the Future”. And in true Automattician fashion, he isn’t shy about recognizing his mistakes.

But Greg is not alone! Xiao Yu also recently wrote about the tools he uses, and a plugin he concocted for his own needs:

I’ve taken all that I wished I could do with both of those plugins and created a new Elasticsearch plugin that I call Whatson. This plugin utilizes the power of D3.js to visualize the nodes, indices, and shards within a cluster. It also allows the drilling down to segment data per index or shard. With the focus on visualizing large clusters and highlighting potential problems within. I hope this plugin helps others find and diagnose issues so give it a try.

How’s that for advanced? :)

Platform Updates: Batching Calls, Privacy Settings, and IDs

We’ve made a few more updates to our APIs recently that we wanted to share with you.

The biggest update is a new meta query parameter, now available on all endpoints. It allows you to batch related calls together, so you only need to make one request to get related data instead of two or three.

Since we released our APIs, we’ve always returned a list of related endpoints in the “meta” portion of the response as a series of links:

"meta": {
        "links": {
            "self": "https:\/\/\/rest\/v1\/sites\/3584907\/posts\/21238",
            "help": "https:\/\/\/rest\/v1\/sites\/3584907\/posts\/21238\/help",
            "site": "https:\/\/\/rest\/v1\/sites\/3584907",
            "replies": "https:\/\/\/rest\/v1\/sites\/3584907\/posts\/21238\/replies\/",
            "likes": "https:\/\/\/rest\/v1\/sites\/3584907\/posts\/21238\/likes\/"

Now, by passing ?meta=site, you can automatically get the data from the above endpoints in the original response. Let’s take a look at an example.

Say you’re loading a specific post but you want to know the name and description of the site the post was on. You can do this by making a call to:

Which will give you a response like the following:

"meta": {
        "links": {
            "self": "https:\/\/\/rest\/v1\/sites\/3584907\/posts\/21238",
            "help": "https:\/\/\/rest\/v1\/sites\/3584907\/posts\/21238\/help",
            "site": "https:\/\/\/rest\/v1\/sites\/3584907",
            "replies": "https:\/\/\/rest\/v1\/sites\/3584907\/posts\/21238\/replies\/",
            "likes": "https:\/\/\/rest\/v1\/sites\/3584907\/posts\/21238\/likes\/"
        "data": {
            "site": {
                "ID": 3584907,
                "name": " News",
                "description": "The latest news on and the WordPress community.",
                "URL": "http:\/\/",
                "jetpack": false,
                "subscribers_count": 8396934,
                "meta": {
                    "links": {
                        "self": "https:\/\/\/rest\/v1\/sites\/3584907",
                        "help": "https:\/\/\/rest\/v1\/sites\/3584907\/help",
                        "posts": "https:\/\/\/rest\/v1\/sites\/3584907\/posts\/",
                        "comments": "https:\/\/\/rest\/v1\/sites\/3584907\/comments\/"
                "is_private": false

You can also pass multiple values in the meta query string. If you want the site information and a list of likes for a post, you can pass "site,likes".
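In client code, the embedded data is just part of the JSON response. Here is a small Python sketch, using a response body shaped like the example above (trimmed for brevity, with invented post fields):

```python
import json

# A response shaped like the example above: with ?meta=site,
# the related site record is embedded under meta.data.
response_body = json.dumps({
    "ID": 21238,
    "title": "An example post",
    "meta": {
        "data": {
            "site": {
                "ID": 3584907,
                "name": "WordPress.com News",
                "is_private": False,
            }
        }
    },
})

post = json.loads(response_body)
# One request now answers both questions: the post and its site.
site = post["meta"]["data"]["site"]
print(site["name"])        # WordPress.com News
print(site["is_private"])  # False
```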

Two other updates we made are new responses:

  • We now include the value of the privacy setting in the site information endpoint, as a boolean named is_private.
  • We now include a global_ID field for all posts. This is a unique ID that you can use to identify posts if you are loading posts from multiple blogs in your application.

We hope you enjoy these updates. We’ll be making more improvements soon!

Originally posted on WordPress.com VIP:

One of the great things about developing for WordPress is the number of tools available for developers. WordPress core ships with a bunch of useful features (e.g. WP_DEBUG) with many more built by the community (like our own Rewrite Rules Inspector and VIP Scanner) that make development and debugging a breeze. The hardest part is getting your environment set up just right: knowing what constants to set, what plugins to install, and so on.

That’s why we built the Developer plugin. It’s your one-stop resource to optimally configure your development environment by making sure you have all the essential settings and plugins installed and available.

If you’re a WordPress developer, we highly recommend installing this plugin in your development environment. You can download the plugin from the Plugins Directory or directly from your WordPress Dashboard (Plugins > Add New).

Here’s a quick walk-through:

If you’d like…


Originally posted on Barry on WordPress:

Yesterday, Valentin Bartenev, a developer at Nginx, Inc., announced SPDY support for the Nginx web server. SPDY is a next-generation networking protocol developed by Google and focused on making the web faster. More information on SPDY can be found on Wikipedia.

At Automattic, we have used Nginx since 2008. Since then, it has made its way into almost every piece of our web infrastructure. We use it for load balancing, image serving (via MogileFS), serving static and dynamic web content, and caching. In fact, we have almost 1000 servers running Nginx today, serving over 100,000 requests per second.

I met Andrew and Igor at WordCamp San Francisco in 2011. For the next six months, we discussed the best way for Automattic and Nginx, Inc. to work together. In December 2011, we agreed that Automattic would sponsor the development and integration of SPDY into Nginx. The…
