diff --git a/README.md b/README.md
index d4807c441..a65c0d72c 100644
--- a/README.md
+++ b/README.md
@@ -27,14 +27,14 @@ import pandasai as pai
pai.api_key.set("your-pai-api-key")
-df = pai.read_csv("./filepath.csv")
+file = pai.read_csv("./filepath.csv")
-df = pai.create(path="your-organization/dataset-name",
- df=df,
+dataset = pai.create(path="your-organization/dataset-name",
+ df=file,
name="dataset-name",
description="dataset-description")
-df.push()
+dataset.push()
```
Your team can now access and query this data using natural language through the platform.
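The `path` argument in the snippet above follows the `organization/dataset` naming convention. As a stand-alone illustration of that convention (the lowercase-letters-digits-hyphens rule is an assumption for this sketch; PandaAI itself is not required to run it):

```python
import re

# Hypothetical checker for the "organization/dataset" path format
# passed to pai.create; the exact character rules are an assumption.
PATH_RE = re.compile(r"^[a-z0-9][a-z0-9-]*/[a-z0-9][a-z0-9-]*$")

def is_valid_dataset_path(path: str) -> bool:
    return bool(PATH_RE.fullmatch(path))

print(is_valid_dataset_path("your-organization/dataset-name"))  # True
print(is_valid_dataset_path("missing-the-slash"))               # False
```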
diff --git a/docs/v3/agent.mdx b/docs/v3/agent.mdx
index 7e9fa871e..85cee379d 100644
--- a/docs/v3/agent.mdx
+++ b/docs/v3/agent.mdx
@@ -3,6 +3,10 @@ title: 'Agent'
description: 'Add few-shot learning to your PandaAI agent'
---
+
+Release v3 is currently in beta. This documentation reflects features and functionality that are still in progress and may change before the final release.
+
+
You can train PandaAI to understand your data better and to improve its performance. Training is as easy as calling the `train` method on the `Agent`.
diff --git a/docs/v3/ai-dashboards.mdx b/docs/v3/ai-dashboards.mdx
index 3c869aae3..601b9073f 100644
--- a/docs/v3/ai-dashboards.mdx
+++ b/docs/v3/ai-dashboards.mdx
@@ -3,6 +3,10 @@ title: 'AI Dashboards'
description: 'Turn your dataframes into collaborative AI dashboards'
---
+
+Release v3 is currently in beta. This documentation reflects features and functionality that are still in progress and may change before the final release.
+
+
PandaAI provides a [data platform](https://app.pandabi.ai) that maximizes the power of your [semantic dataframes](/v3/dataframes).
With a single line of code, you can turn your dataframes into auto-updating AI dashboards - no UI development needed.
Each dashboard comes with a pre-generated set of insights and a conversational agent that helps you and your team explore the data through natural language.
diff --git a/docs/v3/chat-and-cache.mdx b/docs/v3/chat-and-cache.mdx
index d85df01d0..2cb7298af 100644
--- a/docs/v3/chat-and-cache.mdx
+++ b/docs/v3/chat-and-cache.mdx
@@ -2,6 +2,12 @@
title: "Chat and cache"
description: "Learn how to use PandaAI's powerful chat functionality for natural language data analysis and understand how caching improves performance"
---
+
+Release v3 is currently in beta. This documentation reflects features and functionality that are still in progress and may change before the final release.
+
+
## Chat
The `.chat()` method is PandaAI's core feature that enables natural language interaction with your data. It allows you to:
@@ -18,7 +24,7 @@ import pandasai as pai
df_customers = pai.load("company/customers")
-response = df.chat("Which are our top 5 customers?")
+response = df_customers.chat("Which are our top 5 customers?")
```
### Chat with multiple DataFrames
diff --git a/docs/v3/conversational-agent.mdx b/docs/v3/conversational-agent.mdx
index 63f197008..33d452af1 100644
--- a/docs/v3/conversational-agent.mdx
+++ b/docs/v3/conversational-agent.mdx
@@ -3,6 +3,10 @@ title: "Conversational Agent"
description: "Learn how to customize and improve PandaAI's conversational capabilities"
---
+
+Release v3 is currently in beta. This documentation reflects features and functionality that are still in progress and may change before the final release.
+
+
## Custom Head
In some cases, you might want to provide custom data samples to the conversational agent to improve its understanding and responses. For example, you might want to:
diff --git a/docs/v3/data-ingestion.mdx b/docs/v3/data-ingestion.mdx
index e5cad6589..cc942d28c 100644
--- a/docs/v3/data-ingestion.mdx
+++ b/docs/v3/data-ingestion.mdx
@@ -3,6 +3,11 @@ title: 'Add Data Sources'
description: 'Learn how to ingest data from various sources in PandaAI'
---
+
+Release v3 is currently in beta. This documentation reflects features and functionality that are still in progress and may change before the final release.
+
+
## What type of data does PandaAI support?
-PandaAI mission is to make data analysis and manipulation more efficient and accessible to everyone. You can work with data in various ways:
+PandaAI's mission is to make data analysis and manipulation more efficient and accessible to everyone. You can work with data in various ways:
@@ -21,13 +26,13 @@ Loading data from CSV files is straightforward with PandaAI:
import pandasai as pai
# Basic CSV loading
-df = pai.read_csv("data.csv")
+file = pai.read_csv("data.csv")
# Use the semantic layer on CSV
df = pai.create(
path="company/sales-data",
name="sales_data",
- df = df,
+ df=file,
description="Sales data from our retail stores",
columns={
"transaction_id": {"type": "string", "description": "Unique identifier for each sale"},
diff --git a/docs/v3/data-layer.mdx b/docs/v3/data-layer.mdx
index bd71f965c..862758bf7 100644
--- a/docs/v3/data-layer.mdx
+++ b/docs/v3/data-layer.mdx
@@ -3,6 +3,11 @@ title: 'Data Layer'
description: 'Understanding the core data management components of PandaAI'
---
+
+Release v3 is currently in beta. This documentation reflects features and functionality that are still in progress and may change before the final release.
+
+
The Data Layer is built around a powerful [Semantic Layer](/v3/semantic-layer) that handles data processing and representation, enhancing the comprehension of tabular data from various [data sources](/v3/data-ingestion):
- CSV and Excel files
- SQL databases (PostgreSQL, MySQL)
diff --git a/docs/v3/dataframes.mdx b/docs/v3/dataframes.mdx
index 7e7b47752..f90f5ad0e 100644
--- a/docs/v3/dataframes.mdx
+++ b/docs/v3/dataframes.mdx
@@ -15,12 +15,15 @@ When working with local files (CSV, Parquet) or datasets based on such files, th
- Ideal for local file processing or cross-source analysis
```python
-import pandas as pd
-from pandasai import SmartDataframe
+import pandasai as pai
# Load local files as materialized dataframes
-df = pd.read_csv("local_file.csv")
-smart_df = SmartDataframe(df)
+file = pai.read_csv("local_file.csv")
+
+df = pai.create(path="organization/dataset-name",
+ name="dataset-name",
+ df=file,
+ description="describe your dataset")
```
## Virtualized Dataframes
@@ -31,7 +34,7 @@ When loading remote datasets, dataframes are virtualized by default, providing:
- Optimal for remote data sources
```python
-from pandasai import load
+import pandasai as pai
# Load remote datasets (virtualized by default)
-df = load("organization/dataset-name")
\ No newline at end of file
+df = pai.load("organization/dataset-name")
\ No newline at end of file
diff --git a/docs/v3/getting-started.mdx b/docs/v3/getting-started.mdx
index 0a48ec159..c6ab0c087 100644
--- a/docs/v3/getting-started.mdx
+++ b/docs/v3/getting-started.mdx
@@ -34,10 +34,10 @@ pai.api_key.set("YOUR_PANDABI_API_KEY")
import pandasai as pai
# read csv - replace "filepath" with your file path
-df = pai.read_csv("filepath")
+file = pai.read_csv("filepath")
# ask questions
-df.chat('Which are the top 5 countries by sales?')
+file.chat('Which are the top 5 countries by sales?')
```
When you ask a question, PandaAI will use the LLM to generate the answer and output a response.
@@ -58,11 +58,11 @@ This allows you to avoid reading the data every time.
import pandasai as pai
# read csv - replace "filepath" with your file path
-df = pai.read_csv("filepath")
+file = pai.read_csv("filepath")
df = pai.create(path="organization/dataset-name",
name="dataset-name",
- df = df,
+ df=file,
description="describe your dataset")
```
@@ -107,5 +107,4 @@ df_customers = pai.load("company/customers")
df_orders = pai.load("company/orders")
df_products = pai.load("company/products")
-response = pai.chat('Who are our top 5 customers and what products do they buy most frequently?', df_customers, df_orders, df_products)
-```
\ No newline at end of file
+response = pai.chat('Who are our top 5 customers and what products do they buy most frequently?', df_customers, df_orders, df_products)
\ No newline at end of file
diff --git a/docs/v3/introduction.mdx b/docs/v3/introduction.mdx
index 3f540a42c..ae3394fa0 100644
--- a/docs/v3/introduction.mdx
+++ b/docs/v3/introduction.mdx
@@ -3,6 +3,10 @@ title: 'Introduction'
description: 'PandaAI is a Python library designed for end-to-end conversational data analysis.'
---
+
+Release v3 is currently in beta. This documentation reflects features and functionality that are still in progress and may change before the final release.
+
+
## What is PandaAI?
-PandaAI is a Python library that makes it easy to turn tabular datasets into conversational agents. It consists of a [Data Layer](/v3/data-layer) that handles data processing, transformation and semantic enhancement; and a [Natural Language Layer](/v3/overview-nl) that converts user queries into executable code, including charts generation.
+PandaAI is a Python library that makes it easy to turn tabular datasets into conversational agents. It consists of a [Data Layer](/v3/data-layer) that handles data processing, transformation, and semantic enhancement, and a [Natural Language Layer](/v3/overview-nl) that converts user queries into executable code, including chart generation.
diff --git a/docs/v3/large-language-models.mdx b/docs/v3/large-language-models.mdx
index 5af6da45e..85110aaff 100644
--- a/docs/v3/large-language-models.mdx
+++ b/docs/v3/large-language-models.mdx
@@ -3,6 +3,10 @@ title: "Set up LLM"
description: "Set up Large Language Model in PandaAI"
---
+
+Release v3 is currently in beta. This documentation reflects features and functionality that are still in progress and may change before the final release.
+
+
PandaAI supports multiple LLMs.
-To make the library lightweight, the default LLM is BambooLLM, developed by PandaAI team themselves.
-To use other LLMs, you need to install the corresponding llm extension. Once a LLM extension is installed, you can configure it simply using `pai.config.set()`.
+To make the library lightweight, the default LLM is BambooLLM, developed by the PandaAI team.
+To use another LLM, install the corresponding LLM extension. Once the extension is installed, you can configure it with `pai.config.set()`.
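Once an extension is installed, configuration is a small fragment. A sketch assuming the OpenAI extension (`pandasai-openai`); the import path and the API key are placeholders to adapt to your setup:

```python
import pandasai as pai
from pandasai_openai import OpenAI  # assumes: pip install pandasai-openai

# Configure the LLM once; subsequent .chat() calls use it.
llm = OpenAI(api_token="YOUR_OPENAI_API_KEY")  # placeholder key
pai.config.set({"llm": llm})
```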
diff --git a/docs/v3/output-formats.mdx b/docs/v3/output-formats.mdx
index ca3bb9acc..aea083c47 100644
--- a/docs/v3/output-formats.mdx
+++ b/docs/v3/output-formats.mdx
@@ -28,7 +28,6 @@ The response format is automatically determined based on the type of analysis pe
Example:
```python
-import pandas as pd
import pandasai as pai
df = pai.load("my-org/users")
diff --git a/docs/v3/overview-nl.mdx b/docs/v3/overview-nl.mdx
index 47b0eb484..1b8ee1713 100644
--- a/docs/v3/overview-nl.mdx
+++ b/docs/v3/overview-nl.mdx
@@ -3,6 +3,10 @@ title: 'NL Layer'
description: 'Understanding the AI and natural language processing capabilities of PandaAI'
---
+
+Release v3 is currently in beta. This documentation reflects features and functionality that are still in progress and may change before the final release.
+
+
## How does PandaAI NL Layer work?
The Natural Language Layer uses generative AI to transform natural language queries into production-ready code generated by LLMs.
diff --git a/docs/v3/permission-management.mdx b/docs/v3/permission-management.mdx
index 5a7a31ae0..cc35fbabb 100644
--- a/docs/v3/permission-management.mdx
+++ b/docs/v3/permission-management.mdx
@@ -3,6 +3,10 @@ title: 'Permission Management'
description: 'Manage access control and permissions'
---
+
+Release v3 is currently in beta. This documentation reflects features and functionality that are still in progress and may change before the final release.
+
+
The [data platform](/v3/ai-dashboards) allows you to control how your AI dashboards and dataframes are shared and accessed.
You can choose between four levels of access:
- Private: for your own use
diff --git a/docs/v3/privacy-and-security.mdx b/docs/v3/privacy-and-security.mdx
index e67115326..4931df737 100644
--- a/docs/v3/privacy-and-security.mdx
+++ b/docs/v3/privacy-and-security.mdx
@@ -3,4 +3,8 @@ title: "Privacy and Security"
description: "Learn about PandaAI's privacy and security features"
---
+
+Release v3 is currently in beta. This documentation reflects features and functionality that are still in progress and may change before the final release.
+
+
PandaAI provides robust privacy and security features to protect your data and ensure compliance with security requirements.
diff --git a/docs/v3/semantic-layer.mdx b/docs/v3/semantic-layer.mdx
index e27dc03ba..1b20d7131 100644
--- a/docs/v3/semantic-layer.mdx
+++ b/docs/v3/semantic-layer.mdx
@@ -21,12 +21,12 @@ The simplest way to create a semantic layer for CSV files is using the `create`
```python
import pandasai as pai
-df = pai.read_csv("data.csv")
+file = pai.read_csv("data.csv")
df = pai.create(
path="company/sales-data", # Format: "organization/dataset"
name="sales-data", # Human-readable name
- df = df, # Input Dataframe
+ df=file, # Input Dataframe
description="Sales data from our retail stores", # Optional description
columns=[
{
@@ -48,7 +48,7 @@ df = pai.create(
The name field identifies your dataset in the create method.
```python
-df = pai.read_csv("data.csv")
+file = pai.read_csv("data.csv")
pai.create(
path="company/sales-data",
@@ -67,7 +67,7 @@ pai.create(
The path uniquely identifies your dataset in the PandaAI ecosystem using the format "organization/dataset".
```python
-df = pai.read_csv("data.csv")
+file = pai.read_csv("data.csv")
pai.create(
path="acme-corp/sales-data", # Format: "organization/dataset"
@@ -87,11 +87,11 @@ pai.create(
The input dataframe that contains your data, typically created using `pai.read_csv()`.
```python
-df = pai.read_csv("data.csv") # Create the input dataframe
+file = pai.read_csv("data.csv") # Create the input dataframe
pai.create(
path="acme-corp/sales-data",
- df=df, # Pass your dataframe here
+ df=file, # Pass your dataframe here
...
)
```
@@ -105,12 +105,12 @@ pai.create(
A clear text description that helps others understand the dataset's contents and purpose.
```python
-df = pai.read_csv("data.csv")
+file = pai.read_csv("data.csv")
pai.create(
path="company/sales-data",
name="sales-data",
- df = df,
+ df=file,
description="Daily sales transactions from all retail stores, including transaction IDs, dates, and amounts",
...
)
@@ -129,12 +129,12 @@ Define the structure and metadata of your dataset's columns to help PandaAI unde
When specified, only the declared columns will be included, allowing you to select specific columns for your semantic layer.
```python
-df = pai.read_csv("data.csv")
+file = pai.read_csv("data.csv")
pai.create(
path="company/sales-data",
name="sales-data",
- df = df,
+ df=file,
description="Daily sales transactions from all retail stores",
columns=[
{
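The select-only-declared-columns behavior described above can be mimicked with plain Python, as a rough illustration of what declaring `columns` does (the row data and names are hypothetical; PandaAI is not required to run it):

```python
# Illustrative only: mimic "only the declared columns are included"
# with a list of dicts standing in for dataframe rows.
rows = [
    {"transaction_id": "t1", "amount": 9.5, "internal_flag": True},
    {"transaction_id": "t2", "amount": 3.0, "internal_flag": False},
]
declared = ["transaction_id", "amount"]

# Keep only the declared keys, dropping everything else.
selected = [{k: r[k] for k in declared} for r in rows]
print(selected[0])  # {'transaction_id': 't1', 'amount': 9.5}
```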
diff --git a/docs/v3/share-dataframes.mdx b/docs/v3/share-dataframes.mdx
index 9784f24f2..095b87854 100644
--- a/docs/v3/share-dataframes.mdx
+++ b/docs/v3/share-dataframes.mdx
@@ -3,6 +3,10 @@ title: 'Share Dataframes'
description: 'Learn how to push and pull dataframes to/from the PandaAI Data Platform'
---
+
+Release v3 is currently in beta. This documentation reflects features and functionality that are still in progress and may change before the final release.
+
+
## Pushing Dataframes
Once you have turned raw data into dataframes using the [semantic layer](/v3/semantic-layer), you can push them to our data platform with one line of code.
diff --git a/docs/v3/smart-dataframes.mdx b/docs/v3/smart-dataframes.mdx
index 72c190db0..9d3026e6e 100644
--- a/docs/v3/smart-dataframes.mdx
+++ b/docs/v3/smart-dataframes.mdx
@@ -3,6 +3,10 @@ title: 'SmartDataframe'
description: 'Legacy documentation for SmartDataframe class'
---
+
+Release v3 is currently in beta. This documentation reflects features and functionality that are still in progress and may change before the final release.
+
+
## SmartDataframe (Legacy)
> **Note**: This documentation is for backwards compatibility. For new projects, we recommend using the new [semantic dataframes](/v3/dataframes).
diff --git a/docs/v3/smart-datalakes.mdx b/docs/v3/smart-datalakes.mdx
index 12a0ed51d..bee1b720a 100644
--- a/docs/v3/smart-datalakes.mdx
+++ b/docs/v3/smart-datalakes.mdx
@@ -3,6 +3,10 @@ title: 'SmartDatalake'
description: 'Legacy documentation for SmartDatalake class'
---
+
+Release v3 is currently in beta. This documentation reflects features and functionality that are still in progress and may change before the final release.
+
+
## SmartDatalake (Legacy)
> **Note**: This documentation is for backwards compatibility. For new projects, we recommend using the new [semantic dataframes](/v3/dataframes) API with multiple dataframes.