Pipeline Steps Library
Index
- Data Sources
- Dataset Transforms
- Basic Arithmetic
- Financial Analysis
- Publish Data
Data Sources
Fetch Dataset: This step lets you choose the dataset that the rest of the pipeline steps will operate on. It is predefined and is usually the first step of a pipeline. Learn More
Yahoo Finance: Yahoo Finance has information on hundreds of thousands of financial assets (stocks, bonds, ETFs, and indices), which are now easily accessible using Alphacast pipelines. Learn More
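As a rough illustration of what this step retrieves, here is how the same data could be pulled outside Alphacast with the open-source yfinance package (the ticker and date range are made up for the example):

```python
# Illustrative only: fetching Yahoo Finance prices with the yfinance package,
# not the Alphacast step itself. Ticker and dates are placeholders.
import yfinance as yf

prices = yf.download("AAPL", start="2023-01-01", end="2023-12-31")
print(prices[["Close", "Volume"]].head())
```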
Dataset Transforms
Select Columns: This step allows you to filter columns and reduce the amount of data processed by the following steps. The variable selector shows the data available up to that point in the pipeline. Learn More
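Conceptually this is the same as selecting a subset of columns in pandas; a minimal sketch with made-up column names:

```python
import pandas as pd

# Hypothetical dataset with more columns than the later steps need
df = pd.DataFrame({
    "country": ["AR", "AR"],
    "date": ["2023-01-01", "2023-02-01"],
    "cpi": [101.2, 105.4],
    "wages": [100.0, 103.1],
})

# Keep only the columns the rest of the pipeline will use
df = df[["country", "date", "cpi"]]
```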
Rename Columns: The user can choose a new name for each column by filling in the blank space to the right, or keep the original name by leaving the space blank. Learn More
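A pandas analogue of the same idea (column names are illustrative): columns you list get the new name, columns you leave out keep the original one.

```python
import pandas as pd

df = pd.DataFrame({"GDP at constant prices": [100.0, 102.5], "date": ["2023", "2024"]})

# Rename only one column; "date" keeps its original name
df = df.rename(columns={"GDP at constant prices": "real_gdp"})
```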
Regroup Entities: This step allows the user to remove an entity and regroup the data by the remaining ones, selecting a formula to handle the aggregation. It is similar to "Group By" in other data frameworks such as Pandas or SQL. Learn More
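Since the step mirrors a "Group By", a small pandas sketch with invented entities shows the idea: dropping the "province" entity and aggregating by the remaining "date" entity with sum as the formula.

```python
import pandas as pd

df = pd.DataFrame({
    "date": ["2023-01-01", "2023-01-01", "2023-02-01", "2023-02-01"],
    "province": ["Buenos Aires", "Cordoba", "Buenos Aires", "Cordoba"],
    "sales": [100, 40, 110, 45],
})

# Remove "province" and regroup by "date", using sum as the aggregation formula
national = df.groupby("date", as_index=False)["sales"].sum()
```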
Change Frequency: Each dataset has a frequency: daily, weekly, monthly, quarterly, or yearly. This step changes that frequency and recalculates the values of the variables.
With Change Frequency users can resample the time frequency of the dataset, moving between daily, monthly, quarterly, and yearly in either direction. When moving from a higher to a lower frequency the user has to select an aggregation formula, such as average or end of period; when moving from a lower to a higher frequency, an interpolation formula, such as linear interpolation or splines. Learn More
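The same two directions can be sketched with pandas resampling (the data here is synthetic; this is only an analogue of the step, not its implementation):

```python
import pandas as pd

idx = pd.date_range("2023-01-01", periods=90, freq="D")
daily = pd.DataFrame({"price": range(90)}, index=idx)

# Higher to lower frequency: choose an aggregation formula (monthly average)
monthly = daily.resample("M").mean()

# Lower to higher frequency: choose an interpolation formula (linear)
daily_again = monthly.resample("D").interpolate(method="linear")
```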
Merge with Dataset: The user can combine datasets through common entities, choosing which entities will match and how. The type of matching determines what happens when an entity has no match: the step can keep the data from the first dataset, the second, or both datasets, or keep only the rows that match in both. Learn More
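The matching options correspond to the usual join types; a pandas sketch with invented data:

```python
import pandas as pd

prices = pd.DataFrame({"date": ["2023-01-01", "2023-02-01"], "cpi": [101.2, 105.4]})
wages = pd.DataFrame({"date": ["2023-02-01", "2023-03-01"], "wage": [103.1, 108.0]})

# "inner" keeps only matching rows; "left"/"right" keep the first or second
# dataset when there is no match; "outer" keeps both
merged = prices.merge(wages, on="date", how="outer")
```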
Basic Arithmetic
Calculate Variables: Here the user can apply arithmetic, trigonometric, and other functions to the variables of the previously selected dataset. To reference a variable, type the @ symbol followed by the column name.
Learn more here: Full List of Available Functions
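For intuition, a formula such as @wages / @cpi * 100 behaves roughly like a computed column in pandas (names are purely illustrative):

```python
import pandas as pd

df = pd.DataFrame({"wages": [100.0, 103.1], "cpi": [101.2, 105.4]})

# Rough analogue of an Alphacast formula like "@wages / @cpi * 100"
df["real_wages"] = df["wages"] / df["cpi"] * 100
```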
Apply Transform: In this step, the user can transform the original data depending on its frequency and combine different transformations. Some transforms convert data from current to constant prices, deseasonalize time series, convert local currency to dollars, or express values as a percentage of GDP or per capita. Variations can also be computed: month over month, year over year, 12-month sums, changes versus a year ago, and many more.
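The variation-style transforms map onto familiar time-series operations; a pandas sketch with synthetic monthly data (not the Alphacast transform list itself):

```python
import pandas as pd

idx = pd.period_range("2021-01", periods=24, freq="M")
df = pd.DataFrame({"cpi": [100 * 1.04 ** i for i in range(24)]}, index=idx)

# Month-over-month and year-over-year variations, plus a 12-month rolling sum
df["mom"] = df["cpi"].pct_change(1)
df["yoy"] = df["cpi"].pct_change(12)
df["sum_12m"] = df["cpi"].rolling(12).sum()
```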
Financial Analysis
Technical Analysis: With this step users can estimate 130 metrics for technical analysis of financial assets. Metrics include a number of cycle, momentum, volatility, and volume indicators, standard overlap studies, pattern recognition techniques, and statistical functions. Learn More
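To give a flavor of the kind of metrics involved, here are two common indicators computed by hand in pandas on made-up prices: a simple moving average (an overlap study) and a naive RSI (a momentum indicator). This is an illustration, not the step's own implementation.

```python
import pandas as pd

close = pd.Series([10, 10.5, 10.2, 10.8, 11.0, 10.7, 11.2, 11.5, 11.3, 11.8])

# Simple moving average over 5 observations
sma_5 = close.rolling(5).mean()

# Naive RSI: average gains vs. average losses over the same window
delta = close.diff()
gain = delta.clip(lower=0).rolling(5).mean()
loss = (-delta.clip(upper=0)).rolling(5).mean()
rsi_5 = 100 - 100 / (1 + gain / loss)
```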
Debt Sustainability Analysis: Alphacast has integrated a Debt Sustainability Analysis (DSA) tool into its platform. The tool helps you forecast the Debt/GDP ratio of a country, taking into account country-specific characteristics such as growth, inflation, interest rates on domestic and foreign debt, exchange rate depreciation, and the primary surplus. It also applies shocks to generate a fan-chart style chart, so uncertainty is accounted for in the forecasts. Learn More
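The core mechanics can be sketched with the standard textbook debt-dynamics equation d_{t+1} = d_t (1 + r)/(1 + g) - pb; the parameter values below are illustrative and this is not necessarily the exact model the DSA tool uses.

```python
# Minimal sketch of standard Debt/GDP dynamics with made-up parameters
debt_to_gdp = 0.80   # starting Debt/GDP
r = 0.05             # effective real interest rate on the debt
g = 0.02             # real GDP growth
pb = 0.01            # primary surplus as a share of GDP

path = [debt_to_gdp]
for year in range(10):
    debt_to_gdp = debt_to_gdp * (1 + r) / (1 + g) - pb
    path.append(debt_to_gdp)

print([round(d, 3) for d in path])
```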
Portfolio Stats and Tear Sheets Calculator: Alphacast pipelines can be used to design and test portfolio and trading strategies. With the "Portfolio Analysis" step in the pipeline editor you can create tear sheets from daily returns, as well as dynamic rolling stats for different timeframes. Learn More
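A few of the stats a tear sheet typically reports can be sketched from daily returns in pandas; the returns below are randomly generated and the metric choices are only an example of the genre, not the step's exact output.

```python
import numpy as np
import pandas as pd

# Illustrative daily returns for a hypothetical strategy
rng = np.random.default_rng(0)
returns = pd.Series(rng.normal(0.0005, 0.01, 252))

# Annualized return, volatility, Sharpe ratio, and a rolling-volatility series
annual_return = (1 + returns).prod() ** (252 / len(returns)) - 1
annual_vol = returns.std() * np.sqrt(252)
sharpe = annual_return / annual_vol
rolling_vol_60d = returns.rolling(60).std() * np.sqrt(252)
```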
Publish Data
Publish to Dataset: This step allows the user to choose the name of the dataset and the repository where it will be stored.