We run a daily loadtest which deploys each sample to staging. This test is only useful if it passes most of the time; otherwise new failures are lost in the noise.
To help us get more signal than noise, let's do the following:
Reduce the number of samples deployed every day.
Ideally we should pick a handful of the most complex samples so that we exercise more of the system. This list will change over time.
This will serve as a sanity check for the platform. We will move sample testing to the samples repo.
Avoid loading configuration from workflow secrets. Maybe hardcode them in a yaml file? These aren't truly sensitive secret values, so we can safely handle them with less care.
Partition the configuration by sample name: The list of configuration values is currently flat, so we don't know which configuration values are relevant to which sample. Scoping each configuration list by sample name would be a helpful improvement.
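For example, a partitioned config file might take a shape like this (the sample names and values below are made up, just to show the structure):

```yaml
# Hypothetical shape for a checked-in config file, scoped by sample name.
# These values are not secret, so they can live in the repo.
samples:
  minecraft:
    MINECRAFT_VERSION: "1.20"
  django-postgres:
    DJANGO_DEBUG: "false"
    POSTGRES_DB: "app"
```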
Create a GitHub Actions workflow in the DefangLabs/samples repo which will test a sample when it is changed in a PR.
Evaluate the PR diff to detect which samples have been modified
For each modified sample, run the following test:
Make sure we can start the containers locally with `docker compose up`.
Make sure we can deploy to Defang with `defang compose up`.
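The diff-detection step might be sketched like this (an assumption-laden sketch, not the actual workflow: it assumes each sample lives under `samples/<name>/` in the repo, and leaves the workflow wiring aside):

```shell
# Hypothetical sketch: given changed file paths on stdin (e.g. from
# `git diff --name-only origin/main...HEAD` in the PR workflow), print
# the set of modified sample names. Assumes samples live at samples/<name>/.
changed_samples() {
  sed -n 's|^samples/\([^/]*\)/.*|\1|p' | sort -u
}

# Usage sketch -- run the two checks for each modified sample:
# git diff --name-only origin/main...HEAD | changed_samples | while read -r name; do
#   (cd "samples/$name" && docker compose up --detach --wait && defang compose up)
# done
```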
Here are the samples I think are failing because of misconfiguration:
Configuration: We need to set the appropriate config values for each sample. We currently do this by setting a big list of environment variables, then running `defang config create <name>` for each one. We need to identify the remaining config variables that must be set, and set them to appropriate values so that the samples deploy successfully.
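If the values move into a checked-in file, the per-sample loop might look something like this. This is a sketch under assumptions: the file name, location, and KEY=VALUE format are made up, and you should check `defang config create --help` for how the CLI actually accepts a value.

```shell
# Hypothetical sketch: read a sample's config from a checked-in KEY=VALUE
# file. The file name and format are assumptions, not the current setup.
sample_config_pairs() {
  # Emit "NAME VALUE" pairs; lines that are not KEY=VALUE (comments,
  # blank lines) simply don't match the pattern and are skipped.
  sed -n 's|^\([A-Za-z_][A-Za-z0-9_]*\)=\(.*\)$|\1 \2|p' "$1"
}

# Usage sketch (assumes `defang config create NAME` accepts the value on
# stdin -- verify against the real CLI before relying on this):
# sample_config_pairs samples/minecraft/config.env | while read -r name value; do
#   printf '%s' "$value" | defang config create "$name"
# done
```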
I wrote a script which parses all of the sample compose files and prints the relevant environment variables. I think the ones that need to be set via config are the ones declared without a value.
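For illustration, here is a rough sketch of that idea (not the gist script itself; it only handles the two common `environment:` shapes in a compose file, and a real YAML parser would be more robust):

```shell
# Hypothetical sketch: print environment variable names declared without a
# value in a compose file. Handles the list form ("- KEY" vs "- KEY=val")
# and the mapping form ("KEY:" with nothing after the colon).
unset_env_vars() {
  awk '
    /^[ \t]*environment:[ \t]*$/ { in_env = 1; env_indent = match($0, /[^ \t]/); next }
    in_env {
      ind = match($0, /[^ \t]/)
      if (ind != 0 && ind <= env_indent) {
        # Indentation returned to the environment: level -- block is over.
        in_env = 0
      } else if ($0 ~ /^[ \t]*-[ \t]*[A-Za-z_][A-Za-z0-9_]*[ \t]*$/) {
        # List form: "- KEY" with no "=value" part.
        gsub(/[ \t-]/, ""); print
      } else if ($0 ~ /^[ \t]*[A-Za-z_][A-Za-z0-9_]*:[ \t]*$/) {
        # Mapping form: "KEY:" with no value after the colon.
        gsub(/[ \t:]/, ""); print
      }
    }
  ' "$1" | sort -u
}
```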
Footnotes:
Sample Deployment Logs: The specific error output for each sample can be downloaded as a workflow artifact here: https://github.com/DefangLabs/defang-mvp/actions/runs/11434935315/artifacts/2081014390
Compose environment script: see this gist: https://gist.github.com/jordanstephens/2aa9e9e955a8411290bcaa843cd04524