Commit
added beta banner in docs
gdcsinaptik committed Jan 17, 2025
1 parent 4a3b377 commit aa7ee42
Showing 19 changed files with 91 additions and 30 deletions.
8 changes: 4 additions & 4 deletions README.md
@@ -27,14 +27,14 @@ import pandasai as pai

pai.api_key.set("your-pai-api-key")

df = pai.read_csv("./filepath.csv")
file = pai.read_csv("./filepath.csv")

df = pai.create(path="your-organization/dataset-name",
df=df,
dataset = pai.create(path="your-organization/dataset-name",
df=file,
name="dataset-name",
description="dataset-description")

-df.push()
+dataset.push()
```
Your team can now access and query this data using natural language through the platform.
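For instance, a teammate could then load the published dataset and query it in natural language. A minimal sketch, reusing the `pai.load()` and `.chat()` calls shown elsewhere in these docs (the API key, dataset path, and question are placeholders):

```python
import pandasai as pai

# authenticate with the platform (same API key setup as above)
pai.api_key.set("your-pai-api-key")

# load the dataset that was published with .push()
df = pai.load("your-organization/dataset-name")

# ask a question in natural language
response = df.chat("Which are our top 5 customers by revenue?")
print(response)
```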

4 changes: 4 additions & 0 deletions docs/v3/agent.mdx
@@ -3,6 +3,10 @@ title: 'Agent'
description: 'Add few-shot learning to your PandaAI agent'
---

<Note title="Beta Notice">
Release v3 is currently in beta. This documentation reflects the features and functionality in progress and may change before the final release.
</Note>

You can train PandaAI to understand your data better and to improve its performance. Training is as easy as calling the `train` method on the `Agent`.
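A hedged sketch of what that call can look like — the `queries`/`codes` keyword arguments follow earlier releases and may differ in the v3 beta; the question and code string are placeholders:

```python
import pandasai as pai
from pandasai import Agent

file = pai.read_csv("data.csv")
agent = Agent(file)

# assumed signature: pair example questions with the code that answers them
agent.train(
    queries=["What is the total revenue for 2023?"],
    codes=["df[df['year'] == 2023]['revenue'].sum()"],
)
```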


4 changes: 4 additions & 0 deletions docs/v3/ai-dashboards.mdx
@@ -3,6 +3,10 @@ title: 'AI Dashboards'
description: 'Turn your dataframes into collaborative AI dashboards'
---

<Note title="Beta Notice">
Release v3 is currently in beta. This documentation reflects the features and functionality in progress and may change before the final release.
</Note>

PandaAI provides a [data platform](https://app.pandabi.ai) that maximizes the power of your [semantic dataframes](/v3/dataframes).
With a single line of code, you can turn your dataframes into auto-updating AI dashboards - no UI development needed.
Each dashboard comes with a pre-generated set of insights and a conversational agent that helps you and your team explore the data through natural language.
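As a rough illustration of that single line, reusing the create-and-push flow from the README (the path, name, and description are placeholders):

```python
import pandasai as pai

file = pai.read_csv("data.csv")
df = pai.create(
    path="your-organization/sales-data",
    name="sales-data",
    df=file,
    description="Sales data from our retail stores",
)

# one line: publish the dataframe and get its AI dashboard on the platform
df.push()
```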
8 changes: 7 additions & 1 deletion docs/v3/chat-and-cache.mdx
@@ -2,6 +2,12 @@
title: "Chat and cache"
description: "Learn how to use PandaAI's powerful chat functionality for natural language data analysis and understand how caching improves performance"
---

<Note title="Beta Notice">
Release v3 is currently in beta. This documentation reflects the features and functionality in progress and may change before the final release.
</Note>


## Chat

The `.chat()` method is PandaAI's core feature that enables natural language interaction with your data. It allows you to:
@@ -18,7 +24,7 @@ import pandasai as pai

df_customers = pai.load("company/customers")

-response = df.chat("Which are our top 5 customers?")
+response = df_customers.chat("Which are our top 5 customers?")
```

### Chat with multiple DataFrames
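The rest of this section is collapsed in the diff, but the multi-dataframe pattern used later in the getting-started guide looks roughly like this (dataset paths and the question are placeholders):

```python
import pandasai as pai

df_customers = pai.load("company/customers")
df_orders = pai.load("company/orders")

# pass several dataframes to pai.chat and ask a cross-dataset question
response = pai.chat(
    "Which customers placed the most orders last month?",
    df_customers,
    df_orders,
)
print(response)
```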
4 changes: 4 additions & 0 deletions docs/v3/conversational-agent.mdx
@@ -3,6 +3,10 @@ title: "Conversational Agent"
description: "Learn how to customize and improve PandaAI's conversational capabilities"
---

<Note title="Beta Notice">
Release v3 is currently in beta. This documentation reflects the features and functionality in progress and may change before the final release.
</Note>

## Custom Head

In some cases, you might want to provide custom data samples to the conversational agent to improve its understanding and responses. For example, you might want to:
9 changes: 7 additions & 2 deletions docs/v3/data-ingestion.mdx
@@ -3,6 +3,11 @@ title: 'Add Data Sources'
description: 'Learn how to ingest data from various sources in PandaAI'
---

<Note title="Beta Notice">
Release v3 is currently in beta. This documentation reflects the features and functionality in progress and may change before the final release.
</Note>


## What type of data does PandaAI support?
PandaAI's mission is to make data analysis and manipulation more efficient and accessible to everyone. You can work with data in various ways:

@@ -21,13 +26,13 @@ Loading data from CSV files is straightforward with PandaAI:
import pandasai as pai

# Basic CSV loading
df = pai.read_csv("data.csv")
file = pai.read_csv("data.csv")

# Use the semantic layer on CSV
df = pai.create(
path="company/sales-data",
name="sales_data",
-df = df,
+df = file,
description="Sales data from our retail stores",
columns={
"transaction_id": {"type": "string", "description": "Unique identifier for each sale"},
5 changes: 5 additions & 0 deletions docs/v3/data-layer.mdx
@@ -3,6 +3,11 @@ title: 'Data Layer'
description: 'Understanding the core data management components of PandaAI'
---

<Note title="Beta Notice">
Release v3 is currently in beta. This documentation reflects the features and functionality in progress and may change before the final release.
</Note>


The Data Layer is built around a powerful [Semantic Layer](/v3/semantic-layer) that handles data processing and representation, enhancing the comprehension of tabular data from various [data sources](/v3/data-ingestion):
- CSV and Excel files
- SQL databases (PostgreSQL, MySQL)
15 changes: 9 additions & 6 deletions docs/v3/dataframes.mdx
@@ -15,12 +15,15 @@ When working with local files (CSV, Parquet) or datasets based on such files, th
- Ideal for local file processing or cross-source analysis

```python
-import pandas as pd
-from pandasai import SmartDataframe
+import pandasai as pai

# Load local files as materialized dataframes
df = pd.read_csv("local_file.csv")
smart_df = SmartDataframe(df)
file= pai.read_csv("local_file.csv")

df = pai.create(path="organization/dataset-name",
name="dataset-name",
df = file,
description="describe your dataset")
```

## Virtualized Dataframes
@@ -31,7 +34,7 @@ When loading remote datasets, dataframes are virtualized by default, providing:
- Optimal for remote data sources

```python
-from pandasai import load
+import pandasai as pai

# Load remote datasets (virtualized by default)
df = load("organization/dataset-name")
df = pai.load("organization/dataset-name")
11 changes: 5 additions & 6 deletions docs/v3/getting-started.mdx
@@ -34,10 +34,10 @@ pai.api_key.set("YOUR_PANDABI_API_KEY")
import pandasai as pai

# read csv - replace "filepath" with your file path
df = pai.read_csv("filepath")
file = pai.read_csv("filepath")

# ask questions
-df.chat('Which are the top 5 countries by sales?')
+file.chat('Which are the top 5 countries by sales?')
```

When you ask a question, PandaAI will use the LLM to generate the answer and output a response.
@@ -58,11 +58,11 @@ This allows you to avoid reading the data every time.
import pandasai as pai

# read csv - replace "filepath" with your file path
df = pai.read_csv("filepath")
file = pai.read_csv("filepath")

df = pai.create(path="organization/dataset-name",
name="dataset-name",
-df = df,
+df = file,
description="describe your dataset")
```

@@ -107,5 +107,4 @@ df_customers = pai.load("company/customers")
df_orders = pai.load("company/orders")
df_products = pai.load("company/products")

-response = pai.chat('Who are our top 5 customers and what products do they buy most frequently?', df_customers, df_orders, df_products)
-```
+response = pai.chat('Who are our top 5 customers and what products do they buy most frequently?', df_customers, df_orders, df_products)
4 changes: 4 additions & 0 deletions docs/v3/introduction.mdx
@@ -3,6 +3,10 @@ title: 'Introduction'
description: 'PandaAI is a Python library designed for end-to-end conversational data analysis.'
---

<Note title="Beta Notice">
Release v3 is currently in beta. This documentation reflects the features and functionality in progress and may change before the final release.
</Note>

## What is PandaAI?

PandaAI is a Python library that makes it easy to turn tabular datasets into conversational agents. It consists of a [Data Layer](/v3/data-layer) that handles data processing, transformation and semantic enhancement; and a [Natural Language Layer](/v3/overview-nl) that converts user queries into executable code, including charts generation.
4 changes: 4 additions & 0 deletions docs/v3/large-language-models.mdx
@@ -3,6 +3,10 @@ title: "Set up LLM"
description: "Set up Large Language Model in PandaAI"
---

<Note title="Beta Notice">
Release v3 is currently in beta. This documentation reflects the features and functionality in progress and may change before the final release.
</Note>

PandaAI supports multiple LLMs.
To make the library lightweight, the default LLM is BambooLLM, developed by the PandaAI team.
To use other LLMs, you need to install the corresponding LLM extension. Once an LLM extension is installed, you can configure it simply using `pai.config.set()`.
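For example, with an OpenAI extension installed, the `pai.config.set()` call mentioned above might look like the sketch below — the `pandasai-openai` package name, the import path, and the `{"llm": ...}` key are assumptions, so check the extension's own docs:

```python
# pip install pandasai-openai   (assumed extension package name)
import pandasai as pai
from pandasai_openai import OpenAI  # assumed import path

llm = OpenAI(api_token="your-openai-api-key")

# register the LLM globally; the dict key "llm" is an assumption
pai.config.set({"llm": llm})
```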
1 change: 0 additions & 1 deletion docs/v3/output-formats.mdx
@@ -28,7 +28,6 @@ The response format is automatically determined based on the type of analysis pe

Example:
```python
-import pandas as pd
import pandasai as pai

df = pai.load("my-org/users")
4 changes: 4 additions & 0 deletions docs/v3/overview-nl.mdx
@@ -3,6 +3,10 @@ title: 'NL Layer'
description: 'Understanding the AI and natural language processing capabilities of PandaAI'
---

<Note title="Beta Notice">
Release v3 is currently in beta. This documentation reflects the features and functionality in progress and may change before the final release.
</Note>

## How does PandaAI NL Layer work?

The Natural Language Layer uses generative AI to transform natural language queries into production-ready code generated by LLMs.
4 changes: 4 additions & 0 deletions docs/v3/permission-management.mdx
@@ -3,6 +3,10 @@ title: 'Permission Management'
description: 'Manage access control and permissions'
---

<Note title="Beta Notice">
Release v3 is currently in beta. This documentation reflects the features and functionality in progress and may change before the final release.
</Note>

The [data platform](/v3/ai-dashboards) allows you to control how your AI dashboards and dataframes are shared and accessed.
You can choose between four levels of access:
- Private: for your own use
4 changes: 4 additions & 0 deletions docs/v3/privacy-and-security.mdx
@@ -3,4 +3,8 @@ title: "Privacy and Security"
description: "Learn about PandaAI's privacy and security features"
---

<Note title="Beta Notice">
Release v3 is currently in beta. This documentation reflects the features and functionality in progress and may change before the final release.
</Note>

PandaAI provides robust privacy and security features to protect your data and ensure compliance with security requirements.
20 changes: 10 additions & 10 deletions docs/v3/semantic-layer.mdx
@@ -21,12 +21,12 @@ The simplest way to create a semantic layer for CSV files is using the `create`
```python
import pandasai as pai

df = pai.read_csv("data.csv")
file = pai.read_csv("data.csv")

df = pai.create(
path="company/sales-data", # Format: "organization/dataset"
name="sales-data", # Human-readable name
-df = df, # Input Dataframe
+df = file, # Input Dataframe
description="Sales data from our retail stores", # Optional description
columns=[
{
@@ -48,7 +48,7 @@ df = pai.create(
The name field identifies your dataset in the create method.

```python
df = pai.read_csv("data.csv")
file = pai.read_csv("data.csv")

pai.create(
path="company/sales-data",
@@ -67,7 +67,7 @@ pai.create(
The path uniquely identifies your dataset in the PandaAI ecosystem using the format "organization/dataset".

```python
df = pai.read_csv("data.csv")
file = pai.read_csv("data.csv")

pai.create(
path="acme-corp/sales-data", # Format: "organization/dataset"
@@ -87,11 +87,11 @@ pai.create(
The input dataframe that contains your data, typically created using `pai.read_csv()`.

```python
df = pai.read_csv("data.csv") # Create the input dataframe
file = pai.read_csv("data.csv") # Create the input dataframe

pai.create(
path="acme-corp/sales-data",
-df=df, # Pass your dataframe here
+df=file, # Pass your dataframe here
...
)
```
@@ -105,12 +105,12 @@ pai.create(
A clear text description that helps others understand the dataset's contents and purpose.

```python
df = pai.read_csv("data.csv")
file = pai.read_csv("data.csv")

pai.create(
path="company/sales-data",
name="sales-data",
-df = df,
+df = file,
description="Daily sales transactions from all retail stores, including transaction IDs, dates, and amounts",
...
)
@@ -129,12 +129,12 @@ Define the structure and metadata of your dataset's columns to help PandaAI unde
When specified, only the declared columns will be included, allowing you to select specific columns for your semantic layer.

```python
df = pai.read_csv("data.csv")
file = pai.read_csv("data.csv")

pai.create(
path="company/sales-data",
name="sales-data",
-df = df,
+df = file,
description="Daily sales transactions from all retail stores",
columns=[
{
4 changes: 4 additions & 0 deletions docs/v3/share-dataframes.mdx
@@ -3,6 +3,10 @@ title: 'Share Dataframes'
description: 'Learn how to push and pull dataframes to/from the PandaAI Data Platform'
---

<Note title="Beta Notice">
Release v3 is currently in beta. This documentation reflects the features and functionality in progress and may change before the final release.
</Note>

## Pushing Dataframes
Once you have turned raw data into dataframes using the [semantic layer](/v3/semantic-layer), you can push them to our data platform with one line of code.

4 changes: 4 additions & 0 deletions docs/v3/smart-dataframes.mdx
@@ -3,6 +3,10 @@ title: 'SmartDataframe'
description: 'Legacy documentation for SmartDataframe class'
---

<Note title="Beta Notice">
Release v3 is currently in beta. This documentation reflects the features and functionality in progress and may change before the final release.
</Note>

## SmartDataframe (Legacy)

> **Note**: This documentation is for backwards compatibility. For new projects, we recommend using the new [semantic dataframes](/v3/dataframes).
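For reference, the legacy pattern (visible in the old code removed from `docs/v3/dataframes.mdx` above) looked roughly like this — the CSV path and question are placeholders:

```python
import pandas as pd
from pandasai import SmartDataframe

# legacy usage: wrap a pandas DataFrame and chat with it
df = pd.read_csv("data.csv")
smart_df = SmartDataframe(df)
response = smart_df.chat("Which are the top 5 rows by sales?")
print(response)
```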
4 changes: 4 additions & 0 deletions docs/v3/smart-datalakes.mdx
@@ -3,6 +3,10 @@ title: 'SmartDatalake'
description: 'Legacy documentation for SmartDatalake class'
---

<Note title="Beta Notice">
Release v3 is currently in beta. This documentation reflects the features and functionality in progress and may change before the final release.
</Note>

## SmartDatalake (Legacy)

> **Note**: This documentation is for backwards compatibility. For new projects, we recommend using the new [semantic dataframes](/v3/dataframes) API with multiple dataframes.
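Likewise, a hedged sketch of the legacy multi-dataframe pattern — the `SmartDatalake` constructor taking a list of dataframes reflects earlier releases and may differ here:

```python
import pandas as pd
from pandasai import SmartDatalake

employees = pd.DataFrame({"id": [1, 2], "department": ["Sales", "IT"]})
salaries = pd.DataFrame({"id": [1, 2], "salary": [5000, 6000]})

# legacy usage: combine several dataframes into one conversational datalake
lake = SmartDatalake([employees, salaries])
response = lake.chat("Who earns the most, and which department are they in?")
print(response)
```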
