`content/en/serverless/google_cloud/_index.md`
We offer two mechanisms for instrumenting serverless Google Cloud code: sidecar containers and in-process instrumentation.
Sidecar containers are the suggested way to instrument these applications: the Datadog Agent is deployed in a separate container that runs alongside your code, keeping your application decoupled from the Datadog tooling. Detailed information for setting these up can be found at [Google Cloud Run](./google_cloud_run), [Google Cloud Run Functions](./google_cloud_run_functions), and [Google Cloud Run Functions (1st generation)](./google_cloud_run_functions_1st_gen).
We continue to support in-process instrumentation, where the Agent runs inside your application container alongside your code. Detailed information for setting it up can be found at [Google Cloud Run In-Process](./google_cloud_run_in_process).
### Sidecar vs. In-Process Instrumentation for Google Cloud Run
When choosing between instrumenting Google Cloud Run services with a sidecar or in-process, consider cost (the sidecar runs as an additional container alongside each instance), performance, and how tightly your application is coupled to the Datadog tooling.
text: 'Instrument Google Cloud Run applications with the new Datadog Agent sidecar'
---
## Overview
[Google Cloud Run][1] is a fully managed serverless platform for deploying and scaling container-based applications in Google Cloud. Datadog provides metrics and logs collection for these services through our [Google Cloud Integration][2]. This page describes the process of instrumenting your application code running in Google Cloud Run.
## Setup
<div class="alert alert-info">To instrument your Google Cloud Run applications with in-process instrumentation, see <a href="./google_cloud_run_in_process">Google Cloud Run In-Process</a>. For details on the tradeoffs between the sidecar instrumentation described here and in-process instrumentation, see <a href="./#sidecar-vs-in-process-instrumentation-for-google-cloud-run">Sidecar vs. In-Process Instrumentation for Google Cloud Run</a>.</div>
The overall process for instrumenting Google Cloud Run applications is to install a tracer and use a [Sidecar][3] to collect the custom metrics and traces from your application. The application is configured to write its logs to a volume shared with the sidecar, which then forwards them to Datadog.
### Applications
Set up a Datadog tracing library, configure the application to send `dogstatsd` metrics to port `8125`, and send correctly formatted logs to the shared volume.
For custom metrics, use [Distribution Metrics][4] to correctly aggregate data from multiple Google Cloud Run instances.
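For example, with the Node.js tracer a distribution can be emitted directly from application code through the bundled DogStatsD client. This is a minimal sketch: the metric name and tag are illustrative placeholders, and other runtimes follow the same pattern with their own clients.

```js
// Minimal sketch: send a custom distribution metric.
// 'store.cart_size' and the tag are illustrative placeholders.
const tracer = require('dd-trace').init();

function recordCartSize(itemCount) {
  // Distributions are aggregated server-side, so values reported by
  // every Cloud Run instance are combined correctly.
  tracer.dogstatsd.distribution('store.cart_size', itemCount, { env: 'prod' });
}

recordCartSize(3);
```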
{{< tabs >}}
{{% tab "Node.js" %}}
#### Example Code
```js
// the tracer also includes a dogstatsd client
const tracer = require('dd-trace').init({
  logInjection: true,
});

const express = require("express");
const app = express();

const { createLogger, format, transports } = require('winston');

// Write JSON logs to a file on the shared volume so the sidecar can forward them.
// The path must match the DD_SERVERLESS_LOG_PATH configured on the containers.
const logger = createLogger({
  format: format.json(),
  transports: [
    new transports.File({ filename: '/shared-volume/logs/app.log' }),
  ],
});

app.get("/", (req, res) => {
  // Custom metrics are sent with the dogstatsd client; distributions
  // aggregate correctly across multiple Cloud Run instances.
  tracer.dogstatsd.distribution("my_app.requests", 1);
  logger.info("Handling a request");
  res.status(200).json({ msg: "A traced endpoint with custom metrics" });
});

const port = process.env.PORT || 8080;
app.listen(port);
```
#### Details
The `dd-trace-js` library provides support for [Tracing][1], [Metrics][2], and [Profiling][3].
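As a sketch of how these fit together when the tracer is initialized in code (the same behavior is also available through the `DD_PROFILING_ENABLED` and `DD_LOGS_INJECTION` environment variables, which may be preferable if you initialize the tracer with `NODE_OPTIONS` as described below):

```js
// Sketch: enable profiling and log injection when initializing the tracer.
// Both options can also be controlled with environment variables.
const tracer = require('dd-trace').init({
  profiling: true,     // continuous profiler
  logInjection: true,  // add trace context to supported loggers
});

module.exports = tracer;
```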
Set the `NODE_OPTIONS="--require dd-trace/init"` environment variable in your Docker container to include the `dd-trace/init` module when the Node.js process starts.
Application [Logs][4] need to be written to a file that the sidecar container can access. The container setup is detailed [below](#containers). [Log and Trace Correlation][5] is possible when logging is combined with the `dd-trace-js` library. The log files are identified by the `DD_SERVERLESS_LOG_PATH` environment variable, usually `/shared-volume/logs/*.log` to pick up all files ending in `.log` in the `/shared-volume/logs` directory.
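To isolate the logging piece of the example above, here is a minimal `winston` sketch that writes JSON log files where the sidecar can pick them up. The `/shared-volume/logs` path is an assumption and must match your volume mount and `DD_SERVERLESS_LOG_PATH`.

```js
// Sketch: write JSON logs to the shared volume so the sidecar can forward them.
// '/shared-volume/logs' is an assumed mount point; use your own mount path.
const { createLogger, format, transports } = require('winston');

const logger = createLogger({
  format: format.json(), // JSON output keeps injected trace IDs machine-readable
  transports: [
    new transports.File({ filename: '/shared-volume/logs/app.log' }),
  ],
});

logger.info('service started');
```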
### Containers
A sidecar container running `gcr.io/datadoghq/serverless-init:latest` collects telemetry from your application container and sends it to Datadog. The sidecar container is configured with a health check to verify correct startup and a shared volume for log forwarding.
#### Environment Variables
| Variable | Container | Description |
| -------- | --------- | ----------- |
| `DD_SERVERLESS_LOG_PATH` | Both | The path where the agent will look for logs. For example, `/shared-volume/logs/*.log`. |
| `DD_API_KEY` | Sidecar | [Datadog API key][5] - **Required** |
| `DD_LOGS_INJECTION` | Sidecar | When true, enrich all logs with trace data for supported loggers in [Java][7], [Node][8], [.NET][9], and [PHP][10]. See additional docs for [Python][11], [Go][12], and [Ruby][13]. |
| `DD_SERVICE` | Both | See [Unified Service Tagging][14]. |
| `DD_VERSION` | Sidecar | See [Unified Service Tagging][14]. |
| `DD_ENV` | Sidecar | See [Unified Service Tagging][14]. |
| `DD_SOURCE` | Sidecar | See [Unified Service Tagging][14]. |
| `DD_TAGS` | Sidecar | See [Unified Service Tagging][14]. |
| `DD_HEALTH_PORT` | Sidecar | The port for sidecar health checks. For example, `9999`. |

The `DD_LOGS_ENABLED` environment variable is not required.
{{< tabs >}}
{{% tab "GCR UI" %}}
1. In the Cloud Run service page, select **Edit & Deploy New Revision**.
1. Open the **Volumes** main tab and create a new volume for log forwarding.
   1. Make an `In-Memory` volume called `shared-logs`.
   1. Set a size limit if necessary.
1. Open the **Containers** main tab and click **Add Container** to add a new `gcr.io/datadoghq/serverless-init:latest` sidecar container.
1. Click **Add health check** to add a `Startup check` for the container.
   1. Select the `TCP` **probe type**.
   1. Choose any free port (`9999`, for example). You need this port number for the `DD_HEALTH_PORT` variable.
1. Click the **Variables & Secrets** tab and add the required environment variables.
   - The `DD_HEALTH_PORT` variable should be the port you chose for the TCP health check.
   - The `DD_SERVERLESS_LOG_PATH` variable should be set to `<volume-mount-name>/logs/*.log`. You set the `<volume-mount-name>` in the next step, most likely to `/shared-logs`, giving `/shared-logs/logs/*.log`.
   - See the table above for the other required [Environment Variables](#environment-variables).
1. Click the **Volume Mounts** tab and add the logs volume mount.
   - Mount it at the location that matches `DD_SERVERLESS_LOG_PATH`, for example `/shared-logs`.
1. Edit the application container.
   1. Click the **Volume Mounts** tab and add the logs volume mount.
      - Mount it at the same location that you used for the sidecar container, one that matches the `DD_SERVERLESS_LOG_PATH` environment variable, for example `/shared-logs`.
   1. Click the **Variables & Secrets** tab and set the `DD_SERVICE` and `DD_SERVERLESS_LOG_PATH` environment variables as you did for the sidecar.
   1. Click the **Settings** tab and set the **Container start up order** to **Depends on** the sidecar container.
1. **Deploy** the application.
{{% /tab %}}
{{% tab "YAML deploy" %}}
1. Step