Friday Update - August 13th

Hi everybody! Another amazing week with lots of news. We had another rush of new users, the highest since June's launch, and with that, feedback is flowing faster than ever!

Webinars

Last Wednesday we began the four-session open webinar on "Economic Analysis Automation", where we explore the creation of basic charts and dashboards, data uploading, Excel integration, and more. To subscribe to next week's webinar, click here. If you are already subscribed, you don't have to do it again.

Minami

We began our three-week sprint with the super talented design studio MINAMI. They are pushing us to think outside the box on how to implement a full overhaul of the user experience. The final objective is to give users a smoother experience across every aspect and feature of the platform, and to make the discovery and learning process simpler. Great new ideas are showing up.


Pipelines, Data Ingestion & Data Structure

Data Ingestion

The current version of our API allows datasets to be uploaded into our platform. However, we use a different internal process for ingesting data from public sources, which allows us to perform data transformations, aggregations, and further manipulation. We are now isolating those processes so that raw data is "Adapted" into the final Dataset. As we do that, we are also storing every raw dataset version and keeping track of every difference.
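To make the idea concrete, here is a minimal sketch of what an "Adapter" step could look like; the function, column names, and file name below are illustrative assumptions, not our actual internal interface:

```python
import pandas as pd

def adapt(raw: pd.DataFrame) -> pd.DataFrame:
    """Illustrative 'Adapter': turns a raw public-source file into
    the final Dataset shape (names and steps are hypothetical)."""
    df = raw.rename(columns={"fecha": "Date", "pais": "Entity"})
    df["Date"] = pd.to_datetime(df["Date"])
    # Collapse duplicate rows coming from the source
    df = df.groupby(["Date", "Entity"], as_index=False).sum(numeric_only=True)
    return df

# Every raw version is stored before adaptation, so differences
# between versions can be tracked over time.
raw_snapshot = pd.read_csv("source_2021-08-13.csv")  # hypothetical raw snapshot
final = adapt(raw_snapshot)
```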


Data Structure

This new data ingestion platform also allows working with data that has multiple dimensions. In the past, we would only accept a Date and a general Entity dimension, which were used across the different Variables in the Dataset:

| Date | Entity | Production | Imports | Exports |
| --- | --- | --- | --- | --- |
| 2020-01-01 | USA | 2123 | 23000 | 25000 |
| 2020-02-01 | Canada | 3223 | 24000 | 27000 |
| 2020-03-01 | Mexico | 5423 | 22000 | 21000 |

Our API now works with "Columns" rather than Variables, and each column can be flagged as an Entity, allowing for composite keys, or in other words multiple dimensions (see the sketch after the table below):

| Date | Country | State | Production | Import | Export |
| --- | --- | --- | --- | --- | --- |
| 2020-01-01 | USA | Florida | 2123 | 23000 | 25000 |
| 2020-02-01 | USA | California | 3223 | 24000 | 27000 |
| 2020-03-01 | USA | Texas | 5423 | 22000 | 21000 |
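As a rough pandas sketch of what the composite key over the table above means (the pandas representation is ours for illustration, not how the API stores data):

```python
import pandas as pd

df = pd.DataFrame({
    "Date": ["2020-01-01", "2020-02-01", "2020-03-01"],
    "Country": ["USA", "USA", "USA"],
    "State": ["Florida", "California", "Texas"],
    "Production": [2123, 3223, 5423],
    "Import": [23000, 24000, 22000],
    "Export": [25000, 27000, 21000],
})

# Flagging both Country and State as Entity columns makes
# (Date, Country, State) the composite key of the dataset.
df = df.set_index(["Date", "Country", "State"])
```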

Data Pipelines

Having better detail on the structure of each dataset allows for richer data manipulation scenarios, which we will be enabling as Pipelines.


A Pipeline is triggered whenever a participating Dataset gets updated. Individual columns of each dataset are then extracted as defined by the pipeline; the data is transformed, aggregated, or combined; and new Datasets can be created. This may sound complex, but we are working to make it super simple to combine data from multiple datasets on our web portal.
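Here is a minimal sketch of the kind of transformation a Pipeline could express; the function, dataset, and column names are hypothetical:

```python
import pandas as pd

def run_pipeline(production: pd.DataFrame, trade: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical pipeline: extract one column from each dataset,
    combine them, and derive a new Dataset."""
    # Extract the participating columns, as defined by the pipeline
    prod = production[["Date", "Country", "Production"]]
    exp = trade[["Date", "Country", "Export"]]
    # Combine on the shared keys and derive a new variable
    out = prod.merge(exp, on=["Date", "Country"])
    out["ExportShare"] = out["Export"] / out["Production"]
    return out

# A pipeline like this would re-run automatically whenever one of
# the participating Datasets gets updated.
```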

Data marketplace

We are halfway through building our marketplace experience in the platform. You'll be able to sell access to your repositories behind a subscription paywall. You can create products by combining your repositories and setting the billing cycle of the subscription (monthly or annual). On the marketplace page, you can search and filter the products to find the data that you need to complete your reports.


Other minor features & bug fixing

JSON Download

Users can now download datasets as JSON, both from the user interface and via the API.

The URL to directly download the data is: https://www.alphacast.io/api/datasets/DATASET_ID/data.json?api_key=YOUR_API_KEY
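For example, from Python (replace DATASET_ID and YOUR_API_KEY with your own values):

```python
import requests

DATASET_ID = "1234"        # your dataset id
API_KEY = "YOUR_API_KEY"   # your Alphacast API key

url = f"https://www.alphacast.io/api/datasets/{DATASET_ID}/data.json"
resp = requests.get(url, params={"api_key": API_KEY})
resp.raise_for_status()
data = resp.json()  # the dataset's records, parsed from JSON
```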


Move repositories from public to private

In the repository settings, users can now change whether the repo is public or private (that option used to be restricted to the moment of creation).


Share insights

[Image: sharing an insight]

Written by Luciano Cohan

Co-founder of Alphacast. Former Undersecretary of Macroeconomic Programming. Data Science. Building a platform for collaborative work in economics.

Repo with the log of updates to the website and dataset.
