# Cockpit Continuous Integration tasks

This is the [container](./container) and deployment scripts for the Cockpit
integration tests and automated maintenance tasks.

The container has optional mounts:

* A directory for image files. Defined by the `$COCKPIT_IMAGES_DATA_DIR` env
  variable, conventionally `/cache/images`. On production hosts, this is
  mounted from `/var/cache/cockpit-tasks/images`.
* S3 access tokens for image and log buckets. Defined by the `$COCKPIT_S3_KEY_DIR`
  env variable, conventionally `/run/secrets/s3-keys`.
  On production hosts, this is mounted from `/var/lib/cockpit-secrets/tasks/s3-keys`.
* A directory for GitHub and AMQP secrets, used by both the tasks and the
  webhook containers. Must be in `/run/secrets/webhook` (bots currently
  assumes that). It contains at least:
  * `.config--github-token`: GitHub token to create and update issues and PRs.
  * `amqp-{client,server}.{pem,key}`: TLS certificates for RabbitMQ
  * `ca.pem`: The general cockpit CI Certificate Authority which signed the above AMQP certificates
  On production hosts, this is mounted from `/var/lib/cockpit-secrets/webhook`.
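
For illustration, wiring up these mounts manually with podman could look
roughly like this (a hypothetical invocation following the conventions above;
production hosts set this up through Ansible and systemd as described below):

```sh
# Hypothetical sketch -- paths and mount options follow the conventions
# described in the list above, not an exact production command
podman run -d --name cockpit-tasks \
    --volume /var/cache/cockpit-tasks/images:/cache/images:rw \
    --volume /var/lib/cockpit-secrets/tasks/s3-keys:/run/secrets/s3-keys:ro \
    --volume /var/lib/cockpit-secrets/webhook:/run/secrets/webhook:ro \
    --env COCKPIT_IMAGES_DATA_DIR=/cache/images \
    --env COCKPIT_S3_KEY_DIR=/run/secrets/s3-keys \
    quay.io/cockpit/tasks
```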

To generate the [certificates needed for cross-cluster AMQP](https://www.rabbitmq.com/ssl.html) authentication,
run the [credentials/webhook/generate.sh](./credentials/webhook/generate.sh) script.
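
For example, a minimal sketch, assuming you want the certificates in the
conventional webhook secrets directory (the checkout path is illustrative):

```sh
# Run the script from the target directory
cd /var/lib/cockpit-secrets/webhook
~/cockpituous/tasks/credentials/webhook/generate.sh
```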

# Deploying/updating on our CI infrastructure

This happens through [Ansible](../ansible/) depending on the target cloud.
The tasks containers are controlled by systemd units `cockpit-tasks@*`.
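
Some helpful commands for inspecting or stopping these units on a host:

    # journalctl -fu cockpit-tasks@*
    # systemctl stop cockpit-tasks@*
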
# Deploying on OpenShift

OpenShift primarily runs the GitHub webhook responder and AMQP server.

As `/dev/kvm` support on OpenShift is hard to come by, the current bots
`job-runner` and deployment resources only support a tasks container which
processes the `statistics` and `webhook` queues.

You need a persistent shared volume for `test-results.db` and the Prometheus
database. Create it with

    oc create -f tasks/images-claim-centosci.yaml

Now create all the remaining Kubernetes objects. The secrets are created from
the `/var/lib/cockpit-secrets/*` directories as described above:

    make tasks-secrets | oc create -f -
    oc create -f tasks/cockpit-tasks-webhook.json
    oc create -f tasks/cockpit-tasks-centosci.json

## Troubleshooting

Some helpful commands:

    oc describe pods
    oc log -f cockpit-tasks-xxxx

# Deploying locally for development and integration tests

For hacking on the webhook, the tasks container, or the bots infrastructure,
or for validating new container images, you can also run a
[podman pod](http://docs.podman.io/en/latest/pod.html) locally with RabbitMQ,
webhook, minio S3, and tasks containers.
Without arguments this will run some purely local integration tests:

    tasks/run-local.sh

This will also generate the secrets in a temporary directory, unless they
already exist in `tasks/credentials/`. By default this will use the
[`quay.io/cockpit/tasks:latest`](https://quay.io/repository/cockpit/tasks?tab=tags)
container, but you can run a different tag by setting `$TASKS_TAG`.
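
For example, to run against a specific tag instead of `latest` (the tag name
is a placeholder):

    TASKS_TAG=<tag> tasks/run-local.sh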

You can also test the whole GitHub → webhook → tasks → GitHub status workflow
on some cockpituous PR by specifying the PR number and a GitHub token:

    tasks/run-local.sh -p 123 -t ~/.config/cockpit-dev/github-token

This will run tests-scan/tests-trigger on the given PR and trigger an
[unit-tests](../.cockpit-ci/run) test which simply does `make check`.

You can get an interactive shell with

    tasks/run-local.sh -i

to run things manually. For example, use `publish-queue` to inject a job into
AMQP, or run `job-runner` or some other bots command.
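
For example, inside that shell you might inject a job roughly like this (the
payload and options are illustrative assumptions, not the real interface;
check `publish-queue --help` for the actual usage):

    echo '{"command": "prometheus-stats"}' | publish-queue --queue statistics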

# Running with toolbx

This container can also be used for local development with
[toolbx](https://containertoolbx.org/), to get an "official" Cockpit
development environment that's independent from the host:

```sh
toolbox create --image quay.io/cockpit/tasks cockpit
toolbox enter cockpit
```

# Running single container with production-like resources
When you want to debug a problem with a test which may be sensitive to its
particular resource configuration (such as calibrating RAM, /dev/shm sizes, or
behaviour of libvirt in a container, etc.), you can run the tasks container
directly with podman. The production parameters are set in the
`job-runner.toml` file in the
[tasks-systemd Ansible role](../ansible/roles/tasks-systemd/tasks/main.yml).
You don't need secrets, custom networks, or most environment settings; the
crucial parts are the memory, device, and image cache configurations.

If you want to share your host's image cache (which is really a good idea),
temporarily make it writable to the unprivileged user in the container:

```sh
chmod o+w ~/.cache/cockpit-images
```
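
Then a hypothetical sketch of such an invocation (the flag values here are
illustrative, not the actual production settings from `job-runner.toml`):

```sh
# Approximate the production memory, device, and image cache setup;
# adjust the values to match what you are debugging
podman run -it --rm \
    --device=/dev/kvm \
    --memory=16g --shm-size=1g \
    --volume ~/.cache/cockpit-images:/cache/images:rw \
    --env COCKPIT_IMAGES_DATA_DIR=/cache/images \
    quay.io/cockpit/tasks bash
```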
(2) a cockpit/tasks container that runs the actual
[webhook](https://github.com/cockpit-project/cockpituous/blob/main/tasks/webhook).
See the [Kubernetes resources](./cockpit-tasks-webhook.yaml)
for details about the route, service, and pod.

That webhook is a fairly straightforward piece of Python that routes the
event payload into the "webhook" AMQP queue.

* Some cockpit/tasks bot picks up the event payload from the "webhook" queue,
  and interprets it with [tests-scan](https://github.com/cockpit-project/bots/blob/main/tests-scan)
  or [issue-scan](https://github.com/cockpit-project/bots/blob/main/issue-scan)
  depending on the event type. This results in a
  [job-runner JSON task](https://github.com/cockpit-project/bots/blob/main/job-runner)
  or a shell command like `prometheus-stats`, or similar. If this involves any
  Red Hat internal resources, like RHEL images, that command gets put into the
  "internal" queue, otherwise into the "public" queue.

* Some cockpit/tasks bot picks up the task from the internal or
  public queue (depending on whether it has access to Red Hat internal
  infrastructure), executes it, publishes the log, updates the GitHub status,
  and finally acks the queue item.