Commit

update docs

TristenHarr committed Aug 22, 2024
1 parent cc64056 commit ef4b124

Showing 7 changed files with 47 additions and 113 deletions.
3 changes: 3 additions & 0 deletions CHANGELOG.md
@@ -5,6 +5,9 @@ This changelog documents changes between release tags.
## [Unreleased]
Upcoming changes for the next versioned release.

## [0.1.0]
* Update Documentation for ndc-hub

## [0.0.32]
* Fix automated workflow

118 changes: 31 additions & 87 deletions README.md
@@ -11,8 +11,8 @@ The Hasura Turso Connector allows for connecting to a LibSQL/SQLite database or

This connector is built using the [Typescript Data Connector SDK](https://github.com/hasura/ndc-sdk-typescript) and implements the [Data Connector Spec](https://github.com/hasura/ndc-spec).

* [Connector information in the Hasura Hub](https://hasura.io/connectors/turso)
* [Hasura V3 Documentation](https://hasura.io/docs/3.0/index/)
- [See the listing in the Hasura Hub](https://hasura.io/connectors/turso)
- [Hasura V3 Documentation](https://hasura.io/docs/3.0/index/)

## Features

@@ -35,8 +35,6 @@ Below, you'll find a matrix of all supported features for the Turso connector:

## Before you get Started

[Prerequisites or recommended steps before using the connector.]

1. The [DDN CLI](https://hasura.io/docs/3.0/cli/installation) and [Docker](https://docs.docker.com/engine/install/) installed
2. A [supergraph](https://hasura.io/docs/3.0/getting-started/init-supergraph)
3. A [subgraph](https://hasura.io/docs/3.0/getting-started/init-subgraph)
@@ -53,114 +51,60 @@ connector — after it's been configured — [here](https://hasura.io/docs/3.0/g
ddn auth login
```

### Step 2: Initialize the connector

```bash
ddn connector init turso --subgraph my_subgraph --hub-connector hasura/turso
```

In the snippet above, we've used the subgraph `my_subgraph` as an example; however, you should change this
value to match any subgraph which you've created in your project.

### Step 3: Modify the connector's port

When you initialized your connector, the CLI generated a set of configuration files, including a Docker Compose file for
the connector. Typically, connectors default to port `8080`. Each time you add a connector, we recommend incrementing the published port by one to avoid port collisions.

As an example, if your connector's configuration is in `my_subgraph/connector/turso/docker-compose.turso.yaml`, you can modify the published port to reflect a value that isn't currently being used by any other connectors:

```yaml
ports:
- mode: ingress
target: 8080
published: "8082"
protocol: tcp
```
### Step 4: Add environment variables
Now that our connector has been scaffolded out for us, we need to provide a connection string so that the data source can be introspected and the boilerplate configuration can be taken care of by the CLI.
The CLI has provided an `.env.local` file for our connector in the `my_subgraph/connector/turso` directory. We can add a key-value pair
of `TURSO_URL` along with the connection string itself to this file, and our connector will use this to connect to our database. If you are connecting to a cloud-hosted Turso database, you can also provide the `TURSO_AUTH_TOKEN` environment variable, which allows the connector to authenticate.


The file, after adding the `TURSO_URL`, should look like this example if connecting to a Turso hosted database instance:

```env
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://local.hasura.dev:4317
OTEL_SERVICE_NAME=my_subgraph_turso
TURSO_URL=libsql://chinook-tristenharr.turso.io
TURSO_AUTH_TOKEN=eyJhb...
```

To connect to a local SQLite file, you can add the persistent SQLite database file to the `my_subgraph/connector/turso` directory. Since all files in this directory are mounted into the container at `/etc/connector/`, you can then point `TURSO_URL` at the local file. Assuming the SQLite file is named `chinook.sqlite`, the file should look like this example:

```env
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://local.hasura.dev:4317
OTEL_SERVICE_NAME=my_subgraph_turso
TURSO_URL=file:/etc/connector/chinook.sqlite
```

### Step 5: Introspect your data source
### Step 2: Configure the connector

With the connector configured, we can now use the CLI to introspect our database and create a source-specific configuration file for our connector.
Once you have an initialized supergraph and subgraph, run the initialization command in interactive mode while providing a name for the connector in the prompt:

```bash
ddn connector introspect --connector my_subgraph/connector/turso/connector.yaml
ddn connector init turso -i
```

### Step 6. Create the Hasura metadata
#### Step 2.1: Choose the `hasura/turso` option from the list

Hasura DDN uses a concept called "connector linking" to take [NDC-compliant](https://github.com/hasura/ndc-spec)
configuration JSON files for a data connector and transform them into an `hml` (Hasura Metadata Language) file as a
[`DataConnectorLink` metadata object](https://hasura.io/docs/3.0/supergraph-modeling/data-connectors#dataconnectorlink-dataconnectorlink).
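
For reference, here is a rough sketch of the shape a `DataConnectorLink` can take once generated. The field names are recalled from the Hasura DDN docs and the env var names match those shown later in this guide; your generated `turso.hml` is the authoritative version:

```yaml
kind: DataConnectorLink
version: v1
definition:
  name: turso
  url:
    readWriteUrls:
      read:
        valueFromEnv: APP_TURSO_READ_URL
      write:
        valueFromEnv: APP_TURSO_WRITE_URL
  headers:
    Authorization:
      valueFromEnv: APP_TURSO_AUTHORIZATION_HEADER
  schema:
    # The NDC schema and capabilities sections are filled in by the CLI.
    version: v0.1
    schema: {}
    capabilities: {}
```
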
#### Step 2.2: Choose a port for the connector

Basically, metadata objects in `hml` files define our API.
The CLI will ask for a specific port to run the connector on. Choose a port that is not already in use or use the default suggested port.

First we need to create this `hml` file with the `connector-link add` command and then convert our configuration files
into `hml` syntax and add it to this file with the `connector-link update` command.
#### Step 2.3: Provide the env var(s) for the connector

Let's name the `hml` file the same as our connector, `turso`:
| Name | Description |
|-|-|
| TURSO_URL | The connection string for the Turso database, or the file path to the SQLite file |
| TURSO_AUTH_TOKEN | The Turso auth token |
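
As an illustration, for a Turso-hosted database the values might look like the example used elsewhere in these docs; for a local SQLite file, `TURSO_URL` can instead be a `file:` path such as `file:/etc/connector/chinook.sqlite`, and `TURSO_AUTH_TOKEN` can be omitted, as in the local-file example earlier in this diff:

```env
TURSO_URL=libsql://chinook-tristenharr.turso.io
TURSO_AUTH_TOKEN=eyJhb...
```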

```bash
ddn connector-link add turso --subgraph my_subgraph
```

The new file is scaffolded out at `my_subgraph/metadata/turso/turso.hml`.

### Step 7. Update the environment variables
You'll find the environment variables in the `.env` file; they follow the format:

The generated file has two environment variables — one for reads and one for writes — that you'll need to add to your subgraph's `.env.my_subgraph` file. Each key is prefixed by the subgraph name, an underscore, and the name of the connector. Ensure the port value matches what is published in your connector's docker compose file.
`<SUBGRAPH_NAME>_<CONNECTOR_NAME>_<VARIABLE_NAME>`

As an example:
Here is an example of what your `.env` file might look like:

```env
MY_SUBGRAPH_TURSO_READ_URL=http://local.hasura.dev:<port>
MY_SUBGRAPH_TURSO_WRITE_URL=http://local.hasura.dev:<port>
```

```env
APP_TURSO_AUTHORIZATION_HEADER="Bearer QTJ7rl19SvKa0rwOZjYILQ=="
APP_TURSO_HASURA_SERVICE_TOKEN_SECRET="QTJ7rl19SvKa0rwOZjYILQ=="
APP_TURSO_OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="http://local.hasura.dev:4317"
APP_TURSO_OTEL_SERVICE_NAME="app_turso"
APP_TURSO_READ_URL="http://local.hasura.dev:4362"
APP_TURSO_TURSO_AUTH_TOKEN="eyJ..."
APP_TURSO_TURSO_URL="libsql://chinook-tristenharr.turso.io"
APP_TURSO_WRITE_URL="http://local.hasura.dev:4362"
```

These values are for the connector itself and utilize `local.hasura.dev` to ensure proper resolution within the docker container.

### Step 8. Start the connector's Docker Compose
### Step 3: Introspect the connector

Let's start our connector's Docker Compose file by running the following from inside the connector's subgraph:
Introspecting the connector will generate a `config.json` file and a `turso.hml` file.

```bash
docker compose -f docker-compose.turso.yaml up
ddn connector introspect turso
```

### Step 9. Update the new `DataConnectorLink` object
### Step 4: Add your resources

Finally, now that our `DataConnectorLink` has the correct environment variables configured for the connector, we can run the update command to have the CLI look at the configuration JSON and transform it to reflect our database's schema in `hml` format. In a new terminal tab, run:
You can add the models, commands, and relationships to your API by tracking them; this generates the corresponding HML files.

```bash
ddn connector-link update turso --subgraph my_subgraph
ddn connector-link add-resources turso
```
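
If you are working locally, you can then rebuild the supergraph so the newly tracked resources are picked up — the same command this repo's development docs use:

```bash
ddn supergraph build local
```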

After this command runs, you can open your `my_subgraph/metadata/turso.hml` file and see your metadata completely
scaffolded out for you 🎉

## Documentation

View the full documentation for the Turso connector [here](https://github.com/hasura/ndc-turso/blob/main/docs/index.md).
4 changes: 2 additions & 2 deletions connector-definition/connector-metadata.yaml
@@ -1,6 +1,6 @@
packagingDefinition:
type: PrebuiltDockerImage
dockerImage: ghcr.io/hasura/ndc-turso:v0.0.32
dockerImage: ghcr.io/hasura/ndc-turso:v0.1.0
supportedEnvironmentVariables:
- name: TURSO_URL
description: The url for the Turso database
@@ -9,7 +9,7 @@ supportedEnvironmentVariables:
commands:
update:
type: Dockerized
dockerImage: ghcr.io/hasura/ndc-turso:v0.0.32
dockerImage: ghcr.io/hasura/ndc-turso:v0.1.0
commandArgs:
- update
dockerComposeWatch:
27 changes: 7 additions & 20 deletions docs/development.md
@@ -34,36 +34,23 @@ To start the connector on port 9094, for a Turso hosted database instance run:

### Attach the connector to the locally running engine

There should be a file located at `my_subgraph/.env.my_subgraph` that contains
There should be a file located at `.env` that contains

```env
MY_SUBGRAPH_TURSO_READ_URL=http://local.hasura.dev:<port>
MY_SUBGRAPH_TURSO_WRITE_URL=http://local.hasura.dev:<port>
```

```env
APP_TURSO_READ_URL="http://local.hasura.dev:<port>"
APP_TURSO_WRITE_URL="http://local.hasura.dev:<port>"
```

Create a new .env file called `.env.my_subgraph.dev` and place the following values into it:
Edit the values in the `.env` file to point at port 9094 with the locally running connector.

```env
MY_SUBGRAPH_TURSO_READ_URL=http://local.hasura.dev:9094
MY_SUBGRAPH_TURSO_WRITE_URL=http://local.hasura.dev:9094
```

In your `supergraph.yaml` file change the env file to point to the dev file.

```yaml
subgraphs:
  my_subgraph:
    generator:
      rootPath: my_subgraph
      # envFile: my_subgraph/.env.my_subgraph # Change the env file
      envFile: my_subgraph/.env.my_subgraph.dev
      includePaths:
        - my_subgraph/metadata
```

```env
APP_TURSO_READ_URL="http://local.hasura.dev:9094"
APP_TURSO_WRITE_URL="http://local.hasura.dev:9094"
```

Do a local supergraph build:

```ddn supergraph build local --output-dir ./engine```
```ddn supergraph build local```

Mutations and Queries will now be issued against your locally running connector instance.

2 changes: 1 addition & 1 deletion docs/support.md
@@ -2,7 +2,7 @@

The documentation and community will help you troubleshoot most issues. If you have encountered a bug or need to get in touch with us, you can contact us using one of the following channels:
* Support & feedback: [Discord](https://discord.gg/hasura)
* Issue & bug tracking: [GitHub issues](https://github.com/hasura/ndc=[connectorName]/issues)
* Issue & bug tracking: [GitHub issues](https://github.com/hasura/ndc-turso/issues)
* Follow product updates: [@HasuraHQ](https://twitter.com/hasurahq)
* Talk to us on our [website chat](https://hasura.io)

4 changes: 2 additions & 2 deletions package-lock.json


2 changes: 1 addition & 1 deletion package.json
@@ -1,6 +1,6 @@
{
"name": "ndc-turso",
"version": "0.0.32",
"version": "0.1.0",
"main": "index.js",
"author": "Tristen Harr",
"scripts": {
