Public repository for custom blocks for Omniscope Evo.
The simple way:
1. Design your custom block in Omniscope Evo 2020.1 or later. The source code should be reasonably documented and may contain sections describing input fields and parameters.
2. Export it as a ZIP file from the block dialog.
3. Send the file to support@visokio.com and we will include it for you.
The advanced way:
1. Follow points 1-2 from the simple way.
2. Fork the repository.
3. Create or use a directory in the forked repository under one of the main sections, matching the general area of what the block does.
4. Extract the ZIP file into this directory.
5. Consider adding a README.md for convenience, and a thumbnail.png.
6. Run the Python scripts create_index.py and create_readme.py located in the root of the repository.
7. Create a pull request.
- Analytics
  - Clustering
  - Network Analysis
  - Prediction
  - Validation
  - Website
  - Data Profiler
  - Survival Analysis
- Preparation
  - ForEach
  - Geo
  - Interfaces
  - JSON
  - Join
  - Partition
  - Pivot
  - Standardisation
  - Workflow
  - Time Duration Unit Converter
  - Markdown to HTML
  - Smart Schema Normaliser
  - Split Address
  - Expand Date Fields
  - Data Quality Analyser
  - Set Project Parameters
  - Add row ID field
  - Anonymise
  - Unstack Records
  - Conditional Execution
  - URL Encode
  - Streaming Field Renamer
  - Canonical Schema Mapper
  - Smart Date Parser
  - Unescape HTML
  - Field Renamer
  - Sort fields
  - Centroids from GeoJSON
- Connectors
  - Azure
  - Flightstats
  - Overpass
  - Slack
  - Weather
  - Trello
  - Yahoo Finance
  - Etherscan
  - Dune
  - HubSpot
  - Jira
  - Slim CD Transaction
  - Google BigQuery Custom SQL
  - Google BigQuery Import Table
  - XPT Reader
  - AirTable
  - Flipside
- Code & AI
- Outputs
  - API
  - BigQuery
  - Github
  - Messenger
  - PowerPoint
  - Slack
    - Send Report to Slack
- Inputs
Performs DBScan clustering on the first input data provided. The output consists of the original input with a Cluster field appended. If a second input is available, it will be used as output instead.
Performs KMeans clustering on the first input data provided. The output consists of the original input with a Cluster field appended. If a second input is available, it will be used as output instead.
Performs GMM clustering on the first input data provided. The output consists of the original input with a Cluster field appended. If a second input is available, it will be used as output instead.
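As an illustration of what these clustering blocks describe (not their actual implementation, which likely uses a library such as scikit-learn), a naive k-means that appends a Cluster field to each record might look like:

```python
def kmeans_cluster(records, fields, k, iterations=10):
    """Append a 'Cluster' label to each record using a naive k-means."""
    points = [tuple(r[f] for f in fields) for r in records]
    centroids = points[:k]  # deterministic init: first k points
    for _ in range(iterations):
        # assign each point to its nearest centroid (squared Euclidean)
        labels = [min(range(k),
                      key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(pt, centroids[c])))
                  for pt in points]
        # recompute each centroid as the mean of its assigned points
        for c in range(k):
            members = [pt for pt, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = tuple(sum(dim) / len(members)
                                     for dim in zip(*members))
    return [dict(r, Cluster=lab) for r, lab in zip(records, labels)]

data = [{"x": 0.0}, {"x": 0.5}, {"x": 10.0}, {"x": 10.5}]
clustered = kmeans_cluster(data, ["x"], k=2)
```

The field names and initialisation strategy here are illustrative assumptions; the blocks expose their own parameters for the number of clusters and input fields.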
Given a dataset in which each record represents an edge between two nodes of a network, and each node has an associated categorical attribute, the block analyses connections between attributes based on the connections between the associated nodes. The result is a list of records, each specifying a connection from one attribute to another. The connection contains a probability field, which answers the question: given that a node has the specified categorical attribute, how probable is it that it has a connection to another node with the linked categorical attribute?
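A minimal sketch of that probability computation, under the assumption that it is the fraction of nodes with attribute A having at least one neighbour with attribute B (the block's exact definition may differ):

```python
from collections import defaultdict

def attribute_connection_probabilities(edges, attrs):
    """edges: (node, node) pairs; attrs: maps node -> categorical attribute."""
    neighbours = defaultdict(set)
    for a, b in edges:              # treat edges as undirected
        neighbours[a].add(b)
        neighbours[b].add(a)
    by_attr = defaultdict(list)
    for node, attr in attrs.items():
        by_attr[attr].append(node)
    rows = []
    for attr_a, nodes_a in by_attr.items():
        for attr_b in by_attr:
            # fraction of attr_a nodes with at least one attr_b neighbour
            hits = sum(1 for n in nodes_a
                       if any(attrs[m] == attr_b for m in neighbours[n]))
            rows.append({"From": attr_a, "To": attr_b,
                         "Probability": hits / len(nodes_a)})
    return rows

rows = attribute_connection_probabilities(
    [("n1", "n2"), ("n3", "n2")],
    {"n1": "X", "n3": "X", "n2": "Y"})
```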
Given a dataset in which each record represents an edge between two nodes of a network, the block projects all nodes onto a low-dimensional (e.g. two-dimensional) plane in such a way that nodes which share many connections are close together, and nodes that share few connections are far apart.
Performs k-nearest-neighbour prediction on the data. The prediction for a new point depends on the k-nearest-neighbours around the point. The majority class is used as the prediction.
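The majority-vote step can be sketched in a few lines of plain Python (illustrative only; the block's distance metric and tie-breaking may differ):

```python
from collections import Counter

def knn_predict(train, new_point, k=3):
    """train: list of (features, label) pairs. Predicts the majority label
    among the k training points nearest to new_point (squared Euclidean)."""
    nearest = sorted(train,
                     key=lambda fl: sum((a - b) ** 2
                                        for a, b in zip(fl[0], new_point)))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((6, 5), "b")]
pred = knn_predict(train, (0.5, 0.5), k=3)  # three nearest points are all "a"
```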
Predicts classes of new data from old data by drawing a boundary between two classes such that the margin around the boundary is made as large as possible without touching the points.
Computes a confusion matrix as well as model validation statistics.
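The underlying computation is straightforward; a sketch (not the block's code) of deriving a confusion matrix and common validation statistics from actual vs. predicted labels:

```python
from collections import Counter

def confusion_matrix(actual, predicted):
    """Return counts keyed by (actual, predicted) label pairs."""
    return Counter(zip(actual, predicted))

actual    = ["pos", "pos", "neg", "neg", "pos"]
predicted = ["pos", "neg", "neg", "pos", "pos"]
cm = confusion_matrix(actual, predicted)
tp, fn = cm[("pos", "pos")], cm[("pos", "neg")]
fp, tn = cm[("neg", "pos")], cm[("neg", "neg")]
accuracy  = (tp + tn) / len(actual)   # correct predictions over all records
precision = tp / (tp + fp)            # how many predicted positives were real
recall    = tp / (tp + fn)            # how many real positives were found
```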
Extracts the structure and content of a website and its pages.
Provides detailed statistics about a dataset.
Computes an estimate of a survival curve for truncated and/or censored data using the Kaplan-Meier or Fleming-Harrington method.
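A compact sketch of the Kaplan-Meier product-limit estimator for right-censored data (illustrative; the block supports further options such as the Fleming-Harrington variant):

```python
def kaplan_meier(times, events):
    """times: observed durations; events: 1 = event occurred, 0 = censored.
    Returns (time, survival probability) pairs at each event time."""
    at_risk = len(times)
    survival = 1.0
    curve = []
    for t in sorted(set(times)):
        deaths = sum(1 for ti, e in zip(times, events) if ti == t and e == 1)
        if deaths:
            # product-limit update: S(t) *= (1 - d_t / n_t)
            survival *= 1 - deaths / at_risk
            curve.append((t, survival))
        # everyone observed at time t (event or censored) leaves the risk set
        at_risk -= sum(1 for ti in times if ti == t)
    return curve

curve = kaplan_meier([1, 2, 3, 4], [1, 0, 1, 1])
```

Here the censored observation at time 2 reduces the risk set without stepping the curve down, which is the defining behaviour of the estimator.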
The ForEach multi-stage block orchestrates the execution of another Omniscope project, running its workflow multiple times, each time with a different set of parameter values. Unlike the ForEach block, it allows multiple stages of execution, executing/refreshing from source a different set of blocks in each stage.
Converts gridsquare / Maidenhead locator references into latitude and longitude coordinates.
Matches regions in a shapefile with geographical points given as latitude and longitude.
Interfaces with Kedro workflows.
Expands JSON strings in a specified field into separate columns, optionally including the original input data.
Normalises semi-structured JSON strings into a flat table, appending data record by record.
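A sketch of such JSON normalisation, under the assumptions (not confirmed by the block) that nested keys become dotted column names and list elements are indexed numerically:

```python
import json

def flatten_json(obj, prefix=""):
    """Recursively flatten nested dicts/lists into a single-level dict."""
    flat = {}
    if isinstance(obj, dict):
        for key, value in obj.items():
            flat.update(flatten_json(value, f"{prefix}{key}."))
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            flat.update(flatten_json(value, f"{prefix}{i}."))
    else:
        flat[prefix[:-1]] = obj  # drop the trailing separator
    return flat

record = flatten_json(json.loads('{"user": {"name": "Ada", "tags": ["x", "y"]}}'))
```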
Performs a join between values in the first input and intervals in the second input. Rows are joined if the value is contained in an interval.
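The interval-join semantics can be illustrated with a minimal sketch (hypothetical field names `value`, `start`, `end`; the block lets you pick the actual fields):

```python
def interval_join(values, intervals):
    """Join each left-hand row to every interval row containing its value."""
    joined = []
    for v in values:
        for iv in intervals:
            if iv["start"] <= v["value"] <= iv["end"]:
                joined.append({**v, **iv})  # merge the matched rows
    return joined

rows = interval_join([{"value": 5}, {"value": 42}],
                     [{"start": 0, "end": 10, "band": "low"},
                      {"start": 40, "end": 50, "band": "high"}])
```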
Performs a join between the first (left) and second (right) input. The field on which the join is performed must be text containing multiple terms. The result will contain joined records based on how many terms they share, weighted by inverse document frequency.
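As a sketch of the scoring idea (illustrative, not the block's implementation): each left/right pair is scored by the summed inverse-document-frequency weights of the terms they share, so rare shared terms count more than common ones:

```python
import math

def idf_weights(documents):
    """documents: list of term sets; weight(t) = log(N / document frequency)."""
    n = len(documents)
    terms = set().union(*documents)
    return {t: math.log(n / sum(1 for d in documents if t in d))
            for t in terms}

def similarity_join(left, right):
    """Score every left/right text pair by summed IDF of shared terms."""
    docs = [set(t.split()) for t in left + right]
    idf = idf_weights(docs)
    pairs = []
    for l in left:
        for r in right:
            shared = set(l.split()) & set(r.split())
            pairs.append((l, r, sum(idf[t] for t in shared)))
    return pairs

pairs = similarity_join(["acme trading ltd"],
                        ["acme trading limited", "zeta corp"])
```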
Performs a join between the first (left) and second (right) input. The join can be performed using equality/inequality comparators (==, <=, >=, <, >), meaning the result is a constrained Cartesian join including all records that match the conditions.
Partitions the data into chunks of the desired size. There will be a new field called "Partition" which contains a number unique to each partition.
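The numbering scheme described above reduces to integer division over fixed-size chunks; a minimal sketch:

```python
def add_partitions(records, size):
    """Append a 'Partition' number based on fixed-size chunks of records."""
    return [dict(r, Partition=i // size) for i, r in enumerate(records)]

parts = add_partitions([{"id": n} for n in range(5)], size=2)
# Partition numbers: 0, 0, 1, 1, 2
```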
Keeps all selected fixed fields in the output and de-pivots all other fields.
Standardises the values in the selected fields so that they lie in the range between 0 and 1, i.e. the highest value in each field becomes 1 and the lowest value 0; all other values are scaled proportionally.
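This is standard min-max scaling; a self-contained sketch of the transformation described:

```python
def standardise(records, fields):
    """Min-max scale each selected field into [0, 1]."""
    out = [dict(r) for r in records]
    for f in fields:
        values = [r[f] for r in records]
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1  # avoid division by zero for constant fields
        for r in out:
            r[f] = (r[f] - lo) / span
    return out

scaled = standardise([{"v": 10}, {"v": 15}, {"v": 20}], ["v"])
# v becomes 0.0, 0.5, 1.0
```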
Executes another Omniscope project multiple times, each time with a different set of parameter values.
Converts specified datetime, time duration, or time string fields into a chosen time unit.
Automatically cleans and stabilises incoming datasets by normalising field names, merging duplicates, and inferring data types, with zero configuration.
Splits an address field into streetname, streetnumber, and suffix.
Expands selected date fields into separate year, month, day, and other date-part fields for easier analysis.
Checks datasets for common quality issues and outputs detailed issue logs and cleaned, annotated data.
The block updates project parameters using the input data.
Adds a Row ID field with a sequential number.
Anonymise sensitive text data within the input data.
Unstack all records by splitting on text fields with stacked values, filling records with empty strings where needed.
This block conditionally triggers the execution of specified workflow blocks via the Workflow API, running them only when the Conditional Parameter is set to true.
URL-encodes strings in a field using the UTF-8 encoding scheme.
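In Python this maps directly onto the standard library's `urllib.parse.quote`, which percent-encodes using UTF-8 by default; a sketch of applying it to a field (the field name is illustrative):

```python
from urllib.parse import quote

def url_encode_field(records, field):
    """Percent-encode every value of the given field (UTF-8, no safe chars)."""
    return [dict(r, **{field: quote(str(r[field]), safe="")}) for r in records]

encoded = url_encode_field([{"q": "café & tea"}], "q")
# "café & tea" -> "caf%C3%A9%20%26%20tea"
```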
Renames the fields of the input data, optimised for streaming and big data, given a set of rules defined in a CSV file.
Enforces a stable, business-defined schema by mapping aliases to canonical fields, coalescing values, and applying consistent types and defaults.
Automatically parses date fields with mixed or unknown formats into ISO datetime.
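One common approach to mixed-format parsing, sketched here as an assumption about how such a block might work (the candidate format list is illustrative): try each format in turn and emit ISO 8601 on the first success.

```python
from datetime import datetime

# Illustrative candidate formats; a real parser would infer or extend these.
FORMATS = ["%Y-%m-%d", "%d/%m/%Y", "%m-%d-%Y", "%d %b %Y"]

def parse_date(text):
    """Return the ISO date for the first matching format, else None."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(text, fmt).date().isoformat()
        except ValueError:
            pass
    return None  # unparseable values are left blank

parsed = [parse_date(s)
          for s in ["2024-03-01", "01/03/2024", "1 Mar 2024", "???"]]
```

Note that ambiguous values such as "01/03/2024" resolve to whichever candidate format is listed first, which is why format ordering matters in this approach.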
Converts all named and numeric character references to the corresponding Unicode characters.
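That description matches the behaviour of Python's standard-library `html.unescape`; a one-liner sketch applied to a list of values:

```python
from html import unescape

values = ["Fish &amp; Chips", "&#72;&#105;", "&copy; 2024"]
# Named (&amp;, &copy;) and numeric (&#72;) references become Unicode characters.
decoded = [unescape(v) for v in values]
```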
Renames the fields of a data set given a list of current names and new names.
Sorts fields in the input data by name or type.
Calculates the centroid points (lat, long) of the shapes in a specified GeoJSON file and outputs them together with the shape IDs.
Azure Storage Gen2 blob connector that loads a CSV or Parquet blob/file into Omniscope.
Downloads a list of airports as provided by Flightstats (https://www.flightstats.com). The script needs your Flightstats app ID and key, which need to be obtained either by buying their service or by signing up for a test account.
Requests information about the flights specified in the input data from Flightstats (https://www.flightstats.com). If a flight exists, the result will contain live information; otherwise it will not be part of the output. The script needs your Flightstats app ID and key, which need to be obtained either by buying their service or by signing up for a test account.
Downloads a list of airlines as provided by Flightstats (https://www.flightstats.com). The script needs your Flightstats app ID and key, which need to be obtained either by buying their service or by signing up for a test account.
Finds all matching streets given a street name and requests multiple coordinates along each street using data from the Overpass API. It creates a row for each point found that is part of a street matching the given street name; each resulting row includes the street name, the street ID, and the coordinates of the point. The script needs an input containing a field with the street name.
Allows you to call public Slack endpoints.
Retrieves current weather and forecasts from OpenWeatherMap.
Retrieves boards, lists and cards, and allows you to search in Trello.
Fetches price data for tickers from Yahoo Finance.
The Ethereum Blockchain Explorer.
Executes queries and retrieves blockchain data from any public query on dune.com, as well as from any personal private queries your Dune account has access to.
Retrieves contacts, companies, deals and lists from HubSpot.
Retrieves projects and issues from Jira.
Pulls Slim CD gateway transactions.
Executes a SQL query on Google BigQuery and imports the query results.
Imports a table from Google BigQuery.
Reads a SAS Transport (XPT) file, extracting a dataset.
Retrieves records from an AirTable table using the REST API.
Executes a SQL query on Flipside and retrieves the blockchain data.
Executes a one-off prompt to Google Gemini and returns the generated text result.
Executes a one-off prompt to OpenAI GPT and returns the generated text result.
Executes a one-off prompt to a local LLM and returns the generated text result.
Executes a one-off prompt to Anthropic Claude and returns the generated text result.
Executes a one-off prompt to DeepSeek and returns the generated text result.
Executes a system command.
Uploads files from a path column to a remote HTTP endpoint using multipart/form-data.
Writes data to a Google BigQuery table. The table can be created/replaced, or records can be appended to an existing table.
Reads data from and writes data to GitHub.
Sends data to Telegram.
Grabs screenshots of webpages, optionally producing a PDF document.
Appends multiple PDF files, combining them into one PDF file.
Prints Report tabs to PDF files for each record of the input data.
Exports a Report to a PowerPoint pptx file.
Posts messages on a channel.
Sends a Report link with screenshots and an optional PDF to a Slack channel with a message.
A connector for MongoDB
Executes a SQL query on a Snowflake database.
Reads multiple RDS files, either from an upstream block or from a folder, and appends them.
Joins regions defined in a shapefile with points defined as latitudes and longitudes, and gives meta-information about the content of the shapefile.