# The CodeFlare Stack - Scenario 2

## Pre-Train a RoBERTa Language Model from Pre-tokenized Data (Using Demo Data)

RoBERTa is a robustly optimized method for pretraining natural language processing (NLP) systems.
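For orientation, the sketch below shows the masked-language-model objective that RoBERTa pretraining optimizes. It is a minimal, illustrative example assuming the Hugging Face `transformers` and `torch` packages and a deliberately tiny model configuration; it is not the CodeFlare training job itself.

```python
# Minimal sketch of RoBERTa's masked-language-model pretraining objective.
# Assumes `transformers` and `torch` are installed; not the CodeFlare job.
import torch
from transformers import (
    RobertaConfig,
    RobertaForMaskedLM,
    RobertaTokenizerFast,
    DataCollatorForLanguageModeling,
)

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")

# A tiny configuration so the sketch runs quickly on a laptop.
config = RobertaConfig(
    vocab_size=tokenizer.vocab_size,
    hidden_size=64,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=128,
)
model = RobertaForMaskedLM(config)

# Tokenized input, standing in for the scenario's pre-tokenized demo data.
encodings = tokenizer(
    ["CodeFlare pre-trains a RoBERTa language model on demo data."],
    return_tensors="pt",
)

# The collator randomly masks 15% of tokens and builds the MLM labels.
# (With so short a batch, the random masking can occasionally hit nothing.)
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
batch = collator([{"input_ids": encodings["input_ids"][0]}])

# One optimization step on the masked-LM loss.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss = model(**batch).loss
loss.backward()
optimizer.step()
print(f"masked-LM loss: {loss.item():.3f}")
```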

- **Goals:** Learning about CodeFlare
- **You Provide:** nothing, it just works!
- **The CodeFlare Stack Provides:** S3 data | Ray cluster | Kubernetes management | Distributed training job | Pop-up Dashboards


To start:

```shell
codeflare ml/codeflare/training/roberta/demo
```
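The command above drives the whole scenario; behind it, the stack runs the training as distributed work on a Ray cluster. As a rough illustration of that pattern, here is a minimal Ray fan-out sketch, assuming only the `ray` package; `train_shard` is a hypothetical stand-in for one worker's share of the job:

```python
# Minimal sketch of fanning work out over a Ray cluster, the execution
# pattern the CodeFlare stack uses. `train_shard` is a hypothetical
# placeholder, not the actual CodeFlare RoBERTa training task.
import ray

ray.init()  # connects to an existing cluster, or starts a local one

@ray.remote
def train_shard(shard_id: int) -> float:
    # Stand-in for training on one shard of the pre-tokenized data.
    return 0.1 * shard_id  # e.g. a per-shard loss

# Schedule the tasks across the cluster and gather the results.
losses = ray.get([train_shard.remote(i) for i in range(4)])
print(losses)
```

In the actual scenario, the CodeFlare stack wires the S3 demo data, the Kubernetes-managed Ray cluster, and the pop-up dashboards into this pattern for you.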

## CLI In Action

You can run the CodeFlare RoBERTa model architecture against sample data, as shown in the recordings below:

## Pop-up CodeFlare Dashboard In Action

(Recording: codeflare-scenario-2.mp4)

Back to Top