`content/en/serverless/google_cloud/_index.md`
## Overview

A brief introduction to the Google Cloud serverless offerings.
- [Google Cloud Run](./google_cloud_run) for deploying container-based services ([official docs](https://cloud.google.com/run/docs/overview/what-is-cloud-run)). Our instrumentation does not yet support Google Cloud Run Jobs, only Services.
- [Google Cloud Run Functions](./google_cloud_run_functions) for deploying code that gets packaged into container-based services and jobs running on Google Cloud Run infrastructure ([official docs](...)).
- [Google Cloud Run Functions (1st generation)](./google_cloud_run_functions_1st_gen) for deploying code that gets run on the legacy Cloud Functions infrastructure ([official docs](...)).

We offer two mechanisms for instrumenting serverless Google Cloud code: sidecar containers and in-process instrumentation.
The sidecar containers are the suggested way to instrument these applications, deploying the Datadog Agent in a separate container that runs alongside your code. (add a few details about why we suggest this over in-process.) Detailed information for setting these up can be found at [Google Cloud Run](./google_cloud_run), [Google Cloud Run Functions](./google_cloud_run_functions), and [Google Cloud Run Functions (1st generation)](./google_cloud_run_functions_1st_gen).

We continue to support in-process instrumentation. (add a few details and caveats.) Detailed information for setting it up can be found at [Google Cloud Run In-Process](./google_cloud_run_in_process).

### Sidecar vs. In-Process Instrumentation for Google Cloud Run

A discussion of the tradeoffs between instrumenting Google Cloud Run services with a sidecar vs. in-process. Touch on cost, performance, and decoupling your application from the Datadog tooling.

`content/en/serverless/google_cloud/google_cloud_run.md`
---
title: Google Cloud Run Services
aliases:
- /serverless/gcp/gcr
further_reading:
- link: 'https://www.datadoghq.com/blog/instrument-cloud-run-with-datadog-sidecar/'
tag: 'Blog'
text: 'Instrument Google Cloud Run applications with the new Datadog Agent sidecar'
---

## Overview

[Google Cloud Run][1] is a fully managed serverless platform for deploying and scaling container-based applications in Google Cloud. Datadog provides metrics and logs collection for these services through our [Google Cloud Integration][2]. This page describes the process of instrumenting your application code running in Google Cloud Run. We only support Google Cloud Run Services, not Google Cloud Run Jobs.

## Setup

<div class="alert alert-info">To instrument your Google Cloud Run applications with in-process instrumentation, see <a href="./google_cloud_run_in_process">Google Cloud Run In-Process</a>. For details on the tradeoffs between the sidecar instrumentation described here and in-process instrumentation, see <a href="./#sidecar-vs-in-process-instrumentation-for-google-cloud-run">Sidecar vs. In-Process Instrumentation for Google Cloud Run</a>.</div>

The recommended process for instrumenting Google Cloud Run applications is to install a tracer and use a [Sidecar][3] to collect the custom metrics and traces from your application. The application is configured to write its logs to a volume shared with the sidecar, which then forwards them to Datadog.

### Applications

Set up a Datadog tracing library, configure the application to send `dogstatsd` metrics to port `8125`, and send correctly-formatted logs to the shared volume.

For custom metrics, use [Distribution Metrics][4] to correctly aggregate data from multiple Google Cloud Run instances.

{{< tabs >}}
{{% tab "Node.js" %}}
#### Example Code
Add the `dd-trace-js` [library][1] to your application.

#### app.js
```js
// The tracer is started through `NODE_OPTIONS` (see the Dockerfile below) so that
// startup activity is also traced; `require` here returns the running instance.
// The tracer includes a DogStatsD client for custom metrics.
// With `DD_LOGS_INJECTION` set, the tracer injects the current trace ID into logs.
// With `DD_PROFILING_ENABLED` set, the tracer sends profiling information.
const tracer = require('dd-trace').init();

const express = require("express");
const app = express();

const { createLogger, format, transports } = require('winston');

// We can use the DD_SERVERLESS_LOG_PATH environment variable if it is available.
// While this is not necessary, it keeps the log forwarding configuration
// centralized in the Cloud Run configuration.
const logFilename = process.env.DD_SERVERLESS_LOG_PATH?.replace("*.log", "app.log") || "/shared-logs/logs/app.log";
console.log(`writing logs to ${logFilename}`);

const logger = createLogger({
level: 'info',
exitOnError: false,
format: format.json(),
transports: [new transports.File({ filename: logFilename })],
});

app.get("/", (_, res) => {
logger.info("Hello!");
tracer.dogstatsd.distribution("our-sample-app.sample-metric", 1);

res.status(200).json({ msg: "A traced endpoint with custom metrics" });
});

const port = process.env.PORT || 8080;
app.listen(port);
```

You can use `npm install dd-trace` to add the tracer to your package.

#### Dockerfile

Your `Dockerfile` can look something like this. This creates a minimal application container with metrics, traces, logs, and profiling. Note that the image needs to be built for the x86_64 architecture (use the `--platform linux/amd64` parameter for `docker build`).

```dockerfile
FROM node:22-slim

WORKDIR /app

COPY app.js package.json package-lock.json ./
RUN npm ci --only=production

# Initialize the tracer
ENV NODE_OPTIONS="--require dd-trace/init"

EXPOSE 8080

CMD ["node", "app.js"]
```

#### Details

##### Tracing
The `dd-trace-js` library provides support for [Tracing][1], [Metrics][2], and [Profiling][3]. Set the `NODE_OPTIONS="--require dd-trace/init"` environment variable in your Docker container to load the `dd-trace/init` module when the Node.js process starts.

##### Metrics
The tracer includes a DogStatsD client. Use methods such as `tracer.dogstatsd.distribution()` to submit custom [Metrics][2] from your application.

##### Logs
Application [Logs][4] need to be written to a file that the sidecar container can access. The container setup is detailed [below](#containers). [Log and Trace Correlation][5] is possible when logging is combined with the `dd-trace-js` library. The sidecar finds log files based on the `DD_SERVERLESS_LOG_PATH` environment variable, usually `/shared-volume/logs/*.log`, which forwards all files ending in `.log` in the `/shared-volume/logs` directory. The application container needs the `DD_LOGS_INJECTION` environment variable to be set, since `NODE_OPTIONS` is used to start the tracer. If you do not use `NODE_OPTIONS`, call the `dd-trace` `init` method with the `logInjection: true` configuration parameter:

```js
const tracer = require('dd-trace').init({
  logInjection: true,
});
```

##### Profiling
Set `DD_PROFILING_ENABLED` to enable [Profiling][3].

[1]: /tracing/trace_collection/automatic_instrumentation/dd_libraries/nodejs/#getting-started
[2]: /metrics/custom_metrics/dogstatsd_metrics_submission/#code-examples
[3]: https://docs.datadoghq.com/profiler/enabling/nodejs?tab=environmentvariables
[4]: /logs/log_collection/nodejs/?tab=winston30
[5]: /tracing/other_telemetry/connect_logs_and_traces/nodejs

{{% /tab %}}
{{% tab "Python" %}}
{{% /tab %}}
{{< /tabs >}}

### Containers

A sidecar container running `gcr.io/datadoghq/serverless-init:latest` collects telemetry from your application container and sends it to Datadog. The sidecar container is configured with a health check for correct startup, a shared volume for log forwarding, and the environment variables documented [below](#environment-variables).

#### Environment Variables

| Variable | Container | Description |
| -------- | --------- | ----------- |
| `DD_SERVERLESS_LOG_PATH` | Sidecar (and Application, see notes) | The path where the agent will look for logs. For example `/shared-volume/logs/*.log`. - **Required** |
| `DD_API_KEY`| Sidecar | [Datadog API key][5] - **Required**|
| `DD_SITE` | Sidecar | [Datadog site][6] - **Required** |
| `DD_LOGS_INJECTION` | Sidecar *and* Application | When `true`, enrich all logs with trace data for supported loggers in [Java][7], [Node][8], [.NET][9], and [PHP][10]. See additional docs for [Python][11], [Go][12], and [Ruby][13]. See also the details for your runtime above. |
| `DD_SERVICE` | Sidecar *and* Application | See [Unified Service Tagging][14]. |
| `DD_VERSION` | Sidecar | See [Unified Service Tagging][14]. |
| `DD_ENV` | Sidecar | See [Unified Service Tagging][14]. |
| `DD_TAGS` | Sidecar | See [Unified Service Tagging][14]. |
| `DD_HEALTH_PORT` | Sidecar | The port for sidecar health checks. For example `9999` |

The `DD_SERVERLESS_LOG_PATH` environment variable is not required on the application container, but it can be set there and used to configure the application's log filename. This avoids manually synchronizing the Cloud Run service's log path with the application code that writes to it.
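For example, the application can derive a concrete log file name from the same glob-style variable the sidecar uses; a small sketch (the fallback path here is an assumption for illustration):

```javascript
// Derive a concrete log file name from the glob-style DD_SERVERLESS_LOG_PATH
// value shared with the sidecar (for example "/shared-logs/logs/*.log").
// The fallback default below is an assumption, not a Datadog-defined value.
function appLogFile(env) {
  const pattern = env.DD_SERVERLESS_LOG_PATH || "/shared-logs/logs/*.log";
  return pattern.replace("*.log", "app.log");
}

console.log(appLogFile(process.env));
```

Because the resulting file still matches the `*.log` glob, the sidecar picks it up without any further configuration.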

The `DD_LOGS_ENABLED` environment variable is not required.
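Taken together, these variables can be wired up in a Cloud Run service manifest. The following is an illustrative sketch, not a complete deployment: the service name, image path, region, and port are placeholders, and you should confirm the annotation and probe details against the current Cloud Run documentation.

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service                  # placeholder; should match DD_SERVICE
  labels:
    service: my-service             # see "Add a service label" below
spec:
  template:
    metadata:
      annotations:
        # Start the application container only after the sidecar is healthy.
        run.googleapis.com/container-dependencies: '{"app":["datadog-sidecar"]}'
    spec:
      volumes:
        - name: shared-logs
          emptyDir:
            medium: Memory          # in-memory volume for log forwarding
      containers:
        - name: app
          image: gcr.io/my-project/my-app   # placeholder
          env:
            - name: DD_SERVICE
              value: my-service
            - name: DD_LOGS_INJECTION
              value: "true"
            - name: DD_SERVERLESS_LOG_PATH
              value: /shared-logs/logs/*.log
          volumeMounts:
            - name: shared-logs
              mountPath: /shared-logs
        - name: datadog-sidecar
          image: gcr.io/datadoghq/serverless-init:latest
          env:
            - name: DD_API_KEY
              value: <DD_API_KEY>
            - name: DD_SITE
              value: datadoghq.com
            - name: DD_SERVICE
              value: my-service
            - name: DD_LOGS_INJECTION
              value: "true"
            - name: DD_SERVERLESS_LOG_PATH
              value: /shared-logs/logs/*.log
            - name: DD_HEALTH_PORT
              value: "9999"
          volumeMounts:
            - name: shared-logs
              mountPath: /shared-logs
          startupProbe:
            tcpSocket:
              port: 9999            # must match DD_HEALTH_PORT
```

A manifest like this can typically be applied with `gcloud run services replace`.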

TODO: write something about `DD_SOURCE`.

{{< tabs >}}
{{% tab "GCR UI" %}}
1. On the Cloud Run service page, select **Edit & Deploy New Revision**.
1. Open the **Volumes** main tab and create a new volume for log forwarding.
1. Make an `In-Memory` volume called `shared-logs`.
1. You may set a size limit if necessary.
1. Open the **Containers** main tab and click **Add Container** to add a new `gcr.io/datadoghq/serverless-init:latest` sidecar container.
1. Click **Add health check** to add a `Startup check` for the container.
1. Select the `TCP` **probe type**.
1. Choose any free port (`9999`, for example). We will need this port number shortly for the `DD_HEALTH_PORT` variable.
1. Click the **Variables & Secrets** tab and add the required environment variables.
- The `DD_HEALTH_PORT` variable should be the port for the TCP health check you configured.
- The `DD_SERVERLESS_LOG_PATH` variable should be set to `/shared-logs/logs/*.log` where `/shared-logs` is the volume mount point we will use in the next step.
- See the table above for the other required and suggested [Environment Variables](#environment-variables).
1. Click the **Volume Mounts** tab and add the logs volume mount.
- Mount it at the location that matches the prefix of `DD_SERVERLESS_LOG_PATH`, for example `/shared-logs` for a `/shared-logs/logs/*.log` log path.
1. Edit the application container.
1. Click the **Volume Mounts** tab and add the logs volume mount.
- Mount it to the same location that you did for the sidecar container, for example `/shared-logs`.
1. Click the **Variables & Secrets** tab and set the `DD_SERVICE` and `DD_LOGS_INJECTION` environment variables as you did for the sidecar.
1. Click the **Settings** tab and set the **Container start up order** to **Depends on** the sidecar container.
1. **Deploy** the application.
{{% /tab %}}
{{% tab "YAML deploy" %}}
1. Step
- with some details.
{{% /tab %}}
{{< /tabs >}}


### Add a `service` label
Add a `service` label to the Google Cloud service that matches the `DD_SERVICE` value on the containers. Access this from the service list by selecting the service and clicking the **Labels** button.


## Further Reading

{{< partial name="whats-next/whats-next.html" >}}

[1]: https://cloud.google.com/run/docs/overview/what-is-cloud-run
[2]: /integrations/google_cloud_platform
[3]: https://cloud.google.com/run/docs/deploying#sidecars
[4]: /metrics/distributions
[5]: /account_management/api-app-keys/#api-keys
[6]: /getting_started/site/
[7]: /tracing/other_telemetry/connect_logs_and_traces/java/?tab=log4j2
[8]: /tracing/other_telemetry/connect_logs_and_traces/nodejs
[9]: /tracing/other_telemetry/connect_logs_and_traces/dotnet?tab=serilog
[10]: /tracing/other_telemetry/connect_logs_and_traces/php
[11]: /tracing/other_telemetry/connect_logs_and_traces/python
[12]: /tracing/other_telemetry/connect_logs_and_traces/go
[13]: /tracing/other_telemetry/connect_logs_and_traces/ruby
[14]: /getting_started/tagging/unified_service_tagging/
---
title: Google Cloud Run In-Process
---

## Overview

### TODO: mark buildpacks as deprecated