Friday Update - August 13th
Hi everybody! Another amazing week with lots of news. We had another rush of new users, the highest since June's launch, and with that, feedback is flowing faster than ever!
Webinars
Last Wednesday we began the four-session open webinar on "Economic Analysis Automation", where we explore basic chart and dashboard creation, data uploading, Excel integration, and more. To subscribe to next week's webinar, click here. If you are already subscribed, you don't have to do it again.
Minami
We began our three-week sprint with the super talented design studio MINAMI. They are pushing us to think outside the box on a full overhaul of the user experience. The goal is a smoother experience across every aspect and feature of the platform, and a simpler discovery and learning process. Great new ideas are already showing up.
Pipelines, Data Ingestion & Data Structure
Data Ingestion
The current version of our API allows datasets to be uploaded into our platform. However, we use a different internal process for ingesting data from public sources, which lets us perform data transformations, aggregations, and further manipulation. We are now isolating those processes so that incoming data is "Adapted" into the final Dataset. As we do so, we also store every raw dataset version and keep track of every difference.
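As a rough illustration of the idea (the function and field names below are hypothetical, not our actual internals), an "Adapt" step transforms raw source rows into the final Dataset, while each raw version is kept and diffed against the previous one:

```python
# Hypothetical sketch of the "Adapt" step: raw snapshots are stored and
# each new version is compared against the previous one.
def adapt(raw_rows, transform):
    """Apply a transformation to raw source rows to produce the final Dataset."""
    return [transform(row) for row in raw_rows]

def diff_versions(old, new):
    """Track what changed between two stored raw dataset versions."""
    added = [r for r in new if r not in old]
    removed = [r for r in old if r not in new]
    return {"added": added, "removed": removed}

raw_v1 = [{"country": "usa", "production": 2123}]
raw_v2 = [{"country": "usa", "production": 2123},
          {"country": "canada", "production": 3223}]

# Adapt normalizes the raw data (here, upper-casing country codes).
dataset = adapt(raw_v2, lambda r: {**r, "country": r["country"].upper()})
# Version tracking records exactly what changed between ingestions.
changes = diff_versions(raw_v1, raw_v2)
```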
Data Structure
This new data ingestion platform also allows working with data that has multiple dimensions. Previously, we accepted only a Date dimension and a single, general Entity dimension, which were shared across the Variables in the Dataset:
| Date | Entity | Production | Imports | Exports |
|---|---|---|---|---|
| 2020-01-01 | USA | 2123 | 23000 | 25000 |
| 2020-02-01 | Canada | 3223 | 24000 | 27000 |
| 2020-03-01 | Mexico | 5423 | 22000 | 21000 |
Our API now works with "Columns" rather than Variables, and any column can be marked as an Entity, allowing for composite keys (in other words, multiple dimensions):
| Date | Country | State | Production | Import | Export |
|---|---|---|---|---|---|
| 2020-01-01 | USA | Florida | 2123 | 23000 | 25000 |
| 2020-02-01 | USA | California | 3223 | 24000 | 27000 |
| 2020-03-01 | USA | Texas | 5423 | 22000 | 21000 |
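To make the composite-key idea concrete, here is a minimal sketch (the column roles and helper names are illustrative, not our API): each row is identified by its Date plus every column marked as an Entity.

```python
# Rows from the table above; "Country" and "State" are both Entity columns,
# so together with Date they form a composite key.
rows = [
    {"Date": "2020-01-01", "Country": "USA", "State": "Florida", "Production": 2123},
    {"Date": "2020-02-01", "Country": "USA", "State": "California", "Production": 3223},
    {"Date": "2020-03-01", "Country": "USA", "State": "Texas", "Production": 5423},
]
entity_columns = ["Country", "State"]  # previously only one generic Entity was allowed

def row_key(row):
    """Identify a row by Date plus all Entity columns (the composite key)."""
    return (row["Date"],) + tuple(row[c] for c in entity_columns)

# Index the dataset by its composite key.
index = {row_key(r): r for r in rows}
```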
Data Pipelines
Having a more detailed picture of each dataset's structure allows for richer data manipulation scenarios, which we will enable as Pipelines:
A Pipeline is triggered whenever a participating Dataset is updated. Individual columns are extracted from each dataset as defined by the pipeline; the data is transformed, aggregated, or combined; and new Datasets can be created. This may sound complex, but we are working to make it super simple to combine data from multiple datasets on our web portal.
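The flow above can be sketched in a few lines (a simplified model with illustrative names, not our actual pipeline engine): extract the configured columns from the participating datasets, apply a transformation, and emit a new dataset.

```python
# Simplified pipeline model: extract the configured columns from each
# participating dataset, transform the rows, and produce a new dataset.
def run_pipeline(datasets, columns, transform):
    extracted = [{c: row[c] for c in columns} for ds in datasets for row in ds]
    return [transform(row) for row in extracted]

production = [{"Date": "2020-01-01", "Country": "USA", "Production": 2123}]

# A hypothetical pipeline that keeps Date and Production and derives a
# new column expressed in thousands.
new_dataset = run_pipeline(
    [production],
    ["Date", "Production"],
    lambda r: {**r, "ProductionK": r["Production"] / 1000},
)
```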
Data marketplace
We are halfway through building our marketplace experience in the platform. You'll be able to sell access to your repositories behind a subscription paywall. You can create products for sale by combining your repositories and setting the subscription's billing cycle (monthly or annual). On the marketplace page, you can search and filter products to find the data you need to complete your reports.
Other minor features & bug fixing
JSON Download
Users can now download datasets as JSON both from the user interface and in the API.
The URL to download the data directly is: https://www.alphacast.io/api/datasets/DATASET_ID/data.json?api_key=YOUR_API_KEY
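A small helper can build that URL from a dataset ID and API key; this is just a convenience sketch around the documented URL pattern (the actual request is shown only as a comment and not executed here).

```python
from urllib.parse import urlencode

def dataset_json_url(dataset_id, api_key):
    """Build the JSON download URL for a dataset, per the pattern above."""
    base = f"https://www.alphacast.io/api/datasets/{dataset_id}/data.json"
    return f"{base}?{urlencode({'api_key': api_key})}"

url = dataset_json_url("DATASET_ID", "YOUR_API_KEY")
# e.g. urllib.request.urlopen(url).read() would then return the dataset as JSON
```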
Move repositories from public to private
In the repository settings, users can now change whether a repo is public or private (that option used to be available only at repository creation).