Creating tables with Pipelines

Our Pipelines feature is a powerful tool for report automation. Let's say, for example, that you want to create an insight that shows the dynamics of Argentina's FX rates along with a table of the latest data, including percent changes. Wait! Did you just say "a table"!?

With Pipelines you can transform datasets in multiple ways (for many other uses, see here). One possibility is to reduce the input dataset to a smaller version of itself and then embed the outcome in an insight as a table. Let's show you how to do it, step by step.

How to create a table

  1. To begin with, let's fetch the input dataset. This is the dataset we want to work on, which will later be reduced to a table. In our example, we'll be working with the Financial - Argentina - FX premiums - Daily dataset.

  2. If the input dataset has columns we want to discard, let's use the "Select" step to keep only what we need for our table. In this case, these are the columns associated with the different FX rates (Blue, Dolar MEP, Dolar CCL, Dolar Mayorista, Dolar Oficial). Feel free to skip this step if you don't need to filter columns.


  3. Our dataset now has the required variables, but it is shown "unstacked": all variables are columns of the dataset. We need to stack them in order to make some calculations and then reduce its dimension. For that, we will use our "Manual Configuration" step and include the following code:
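(The original step configuration is not reproduced here. Conceptually, this step performs a wide-to-long "melt". As a sketch in pandas — the column names below are illustrative assumptions, not the dataset's actual headers:)

```python
import pandas as pd

# Illustrative "unstacked" dataset: one column per FX rate (names are assumptions)
wide = pd.DataFrame({
    "Date": ["2022-01-03", "2022-01-04"],
    "Blue": [205.0, 206.5],
    "Dolar MEP": [198.2, 199.1],
})

# Stack the rate columns: each (Date, rate) pair becomes its own row
long = wide.melt(id_vars="Date", var_name="Variable", value_name="Value")
print(long)
```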


  4. Once melted, publish the output as a new dataset, which will be used in a second pipeline.


  5. Create a new pipeline and fetch the new dataset, which has the variables we need, but stacked one above the other. Before making any calculations, we need to use the "Regroup Entities" step so that Date and Variable are the dataset's only entities.
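Roughly speaking, regrouping entities corresponds to re-indexing the stacked data so that Date and Variable jointly identify each row. A minimal pandas sketch, with toy data and assumed column names:

```python
import pandas as pd

# Stacked (long) data with assumed column names
df = pd.DataFrame({
    "Date": ["2022-01-03", "2022-01-04", "2022-01-03", "2022-01-04"],
    "Variable": ["Blue", "Blue", "Dolar MEP", "Dolar MEP"],
    "Value": [205.0, 206.5, 198.2, 199.1],
})

# Treat Date and Variable as the dataset's entities (a MultiIndex here)
regrouped = df.set_index(["Date", "Variable"])
print(regrouped)
```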


  6. Now we can make any calculations we want. In our example, we calculate daily, monthly and YTD percent changes using the "Apply Transform" step and then use the "Rename Variables" step, but feel free to experiment with other alternatives!
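As a rough pandas equivalent of those percent-change transforms (daily and YTD shown; a monthly change would follow the same pattern against the prior month's value — all names and figures here are illustrative):

```python
import pandas as pd

df = pd.DataFrame({
    "Date": pd.to_datetime(["2022-01-03", "2022-01-04", "2022-01-03", "2022-01-04"]),
    "Variable": ["Blue", "Blue", "Dolar MEP", "Dolar MEP"],
    "Value": [205.0, 206.5, 198.2, 199.1],
})

df = df.sort_values(["Variable", "Date"]).reset_index(drop=True)
grouped = df.groupby("Variable")["Value"]

# Daily % change: each value vs. the previous observation of the same variable
df["Daily % chg"] = grouped.pct_change() * 100

# YTD % change: each value vs. the first observation of the series (the toy data
# is assumed to start at the beginning of the year)
df["YTD % chg"] = (df["Value"] / grouped.transform("first") - 1) * 100
print(df)
```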

  7. Here comes the tricky part. We want a table that updates automatically and shows the latest available data. So we need to use our "Manual Configuration" step again, this time to tell the pipelines engine to keep only the latest data.

{"type":"select-rows","calculation":"if(#'Date'=max(#'Date'), True, False)"}
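The select-rows calculation above keeps every row whose Date equals the maximum Date in the dataset. The same filter, sketched in pandas with toy data and assumed column names:

```python
import pandas as pd

df = pd.DataFrame({
    "Date": pd.to_datetime(["2022-01-03", "2022-01-04", "2022-01-03", "2022-01-04"]),
    "Variable": ["Blue", "Blue", "Dolar MEP", "Dolar MEP"],
    "Value": [205.0, 206.5, 198.2, 199.1],
})

# Keep only the rows where Date equals the latest available Date
latest = df[df["Date"] == df["Date"].max()]
print(latest)
```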


  8. Now we have a reduced dataset, which looks very much like a table! If you want, you can continue applying different steps or directly publish the outcome as another dataset.

  9. Finally, embed the table in an insight. Create a new insight and use the following snippet. Remember to replace DATASETID with the dataset ID (for those who don't remember, the ID can be found in the corresponding dataset's URL).

<iframe width="100%" src="/datasets/DATASETID/embed"></iframe>

The outcome should look like the table below. You can check our insight example here.


Written by


Economist. Doing macro research @Seido. Building something different @Alphacast
