Hi, my name is Luis Carli. I design and develop data visualizations.
Take a look at my projects, follow my tweets or write to


An animated guide to Frequency Trails (aka Joyplots)

3 October 2017

I did a visualization some time ago using frequency trails, and the biggest response I got was: how the $%&* do I read this??
Well, look no further: I've got some great step-by-step animations just for you!

Organized complexity

18 August 2017

This is an old project of mine, but one that I still hold close to my heart. It was long past time for me to go through my old folders and write new documentation for it. Enjoy!

Deeplearn.js: a hardware-accelerated machine intelligence library for the web #

On one side we have the web, which is today the best platform for developing interactive visualization interfaces, and shows no sign of losing that position any time soon, quite the opposite. It's highly accessible from a coding point of view (just start typing in an HTML file), from a learning point of view (plenty of resources), from a sharing point of view (every computer comes with a browser), and many others.

On the other side we have machine learning applications, especially deep learning, which sorely need visual interfaces: for giving visibility into their inner workings, for providing the unique help of fast feedback through iteration, and more. So what a great match can be made with a library like Deeplearn.js!

Roughly a year ago the Google Brain team put out an interactive neural network playground. It ran on a custom tiny neural network library based, among others, on Andrej Karpathy's work on convnet.js. So just imagine what kind of doors Deeplearn.js and similar works could open.

Here’s how far you’re likely to get from America’s largest cities #

The Washington Post graphics team has been on a roll lately; also worth seeing are their scrolling piece "Travel the path of the solar eclipse" (with an accurate shadow shape that mutates as you scroll) and Denise Lu's piece on all the eclipses of our lifetime.

Our Broken Economy, in One Simple Chart #

Most Americans would look at these charts and conclude that inequality is out of control. The president, on the other hand, seems to think that inequality isn’t big enough.

Beautiful chart that exposes with great clarity how inequality, as we know it today, is a recent phenomenon that is getting progressively worse. Also worth seeing is this video from 2013, made in partnership between the Economic Policy Institute and Periscopic, where we get an explanation of the forces behind this created inequality.

On the desire of tooling automation

5 August 2017

Lately I've been developing internal visualization tools and proofs of concept for my company. One of them does complex computations on top of user input to generate tailored charts and tables, which are configurable so that our research team can put the final touches on them before sharing them with clients.

The column names of the tables in the figures are editable. Most of the tables have the same column names, but not all of them. It turned out that the column names needed to be constantly edited across all tables, a task that quickly got repetitive for the research team. As a consequence, there was a feature request to automatically link columns with the same name, so that as one got edited all the matching ones in other tables would update.

As a consequence of our modern culture, of what's deemed valuable, there's a constant desire to automate any repetitive task too early. Users should do less and the tools they use should do more. Nothing wrong here; the problem is that early automation can increase the complexity of the system while often restraining its flexibility.

Raising a system to a higher abstraction level too soon is often an overvalued task, not because it isn't useful to do so, or better to have a higher abstraction, but because the longevity and quality of most solutions rarely pay off the time and energy of their development. Part of the problem is that the ideal of automation is too easy to sell, while the quality of a solution is too hard to properly test until you're locked inside the limitations of the new "better" abstraction (with the prospect of now paying double the cost to move to another solution).

This applies to a lot of areas and problems: for example, the development of tools for automating the development of sites, data visualizations, and machine learning; going down a level, we have the development of tools for automating the build and deploy of our code (the state of the JavaScript ecosystem is a particularly interesting example here); going down a level again, we have an endless discussion on the quality of programming languages, on why you should move to "X" (or why we developed "X") to solve "Y".

Still, we're always operating at a certain abstraction and automation level of a certain ecosystem, and improvements in our society and technology often come from the development and adoption of new, more powerful abstractions and automations. But I believe it's bad to have those as our main direct goals. They are better as a longer arc, a consequence of continuously developing and iterating products at the established abstraction levels.

As for the user request in my story, I could have tried to implement a "smart" automatic cross-editing of table cells, a new system that would make the research team's life better and that could be reused in all future products of the company. But how long would it take to pull that off? How much more complex and harder to maintain would the code and interface become to accommodate that feature among many others? How long until that abstraction breaks, because it overweighted one facet of the research team's work?

Instead of rushing to a new, complex level of automation, I took another approach. By watching the research team use my tool, I saw that they converged on always making the same edits to the column names: every time there was a column named, for example, "A", they would rename it to "B". So I made a list of those changes and applied them to the names computed by the tool. It ended up that this covered all the column editing the research team was doing.
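A minimal sketch of that approach (the column names and replacements below are hypothetical, not the actual ones from the tool):

```javascript
// Hypothetical rename list: each entry maps a computed column name
// to the edited version the research team kept converging on.
const renames = new Map([
  ["Avg. Ret.", "Average Return"],
  ["Std. Dev.", "Volatility"],
]);

// Replace each computed name with its edited version, if one exists;
// names without an entry pass through unchanged.
function applyRenames(columnNames) {
  return columnNames.map((name) => renames.get(name) ?? name);
}

console.log(applyRenames(["Avg. Ret.", "Date", "Std. Dev."]));
// → ["Average Return", "Date", "Volatility"]
```

The appeal of a flat list like this is that it's trivial to inspect and extend, and it adds no coupling between tables, unlike a linking system, which would have to track which columns are "the same" across every figure.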

Relief maps by Anton Balazh #

Earth's geological features, like mountain ranges, are hard to fathom at larger scales. Satellite images aren't close enough, pictures from planes aren't far enough, digital images don't have enough resolution, and so on. But these relief maps, with their elevations and depressions slightly exaggerated, are a great middle ground that avoids all those shortcomings.

The evolution of trust #

Beautiful work from Nicky Case on how game theory can shape our relations. He takes the time to ease the reader incrementally into the concepts through nice interactive explanations. We're invited to learn the intricacies of the presented systems by playing with them, turning knobs and watching the simulations unfold. Nicky calls these kinds of interactive pieces explorable explanations; it's worth taking a look at his previous work.

Joyplot: what news outlets publish vs what we react to #

The latest development in the joyplot saga: this animated comparison between publishing and reacting is an efficient use of the technique and delivers a lot of information in a well-packed figure.

For the readers who haven't been following the whole story, there's been a surge in the twittersphere around the (just recently (re?)named) "joyplots", with plenty of discussion and a prolific production from the R folks (even a dedicated ggplot library); including also, by pure coincidence, a collaboration from yours truly.

Still, I much prefer the previous name, frequency trails, even if it's less tied to the interesting story of the appropriation done for the album "Unknown Pleasures". Naming problems aside, I believe the technique is great, and I don't think it carries as steep a learning curve as some have argued.

July 2017

Summers Are Getting Hotter #

Extraordinarily hot summers — the kind that were virtually unheard-of in the 1950s — have become commonplace.

When the data is good, the overlay of two shifted distributions can be extremely powerful. The mastery of the NYT graphics team pushes the main chart of this article to the next level. It's interesting to compare it to the original chart from the academic research.

Artificial Intelligence Is Stuck. Here’s How to Move It Forward. #

[...] neither of our two current approaches to funding A.I. research — small research labs in the academy and significantly larger labs in private industry — is poised to succeed.

Academic labs are too small. And while big companies have the size and resources needed for significant breakthroughs, their bottom lines and quarterly reports push them in other directions.