There are two options for setting your CrowdStrike client ID and secret for SHRA: explicitly in Helm values, or via Kubernetes secrets or configmaps.
#### Option 1. Explicitly set CrowdStrike credentials in values_override.yaml
In your `values_override.yaml` file, set `crowdstrikeConfig.clientID` and `crowdstrikeConfig.clientSecret` to the values you saved in `FALCON_CLIENT_ID` and `FALCON_CLIENT_SECRET`.
For example, with placeholder values shown in place of your actual credentials:
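```yaml
crowdstrikeConfig:
  clientID: "<your FALCON_CLIENT_ID value>"
  clientSecret: "<your FALCON_CLIENT_SECRET value>"
```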
| `crowdstrikeConfig.clientID` | required | The client ID used to authenticate the self-hosted registry assessment service with CrowdStrike. | "" |
| `crowdstrikeConfig.clientSecret` | required | The client secret used to authenticate the self-hosted registry assessment service with CrowdStrike. | "" |
#### Option 2. Set CrowdStrike credentials using existing Kubernetes secret or configmap
You can use existing Kubernetes secrets or configmaps to provide your CrowdStrike credential environment variables to the SHRA pods. This is useful when you want to manage credential settings separately from the Helm chart deployment.
You may already have a secret or configmap you want to use; otherwise, create your Kubernetes secret or configmap in the `falcon-self-hosted-registry-assessment` namespace.
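If you need to create the secret yourself, a minimal sketch of a manifest is shown below. The name `crowdstrike-credentials` and the key names `FALCON_CLIENT_ID` and `FALCON_CLIENT_SECRET` are illustrative assumptions, not values mandated by the chart; adjust them to match what your deployment expects, then apply the manifest with `kubectl apply -f <file>`.

```yaml
# Sketch of a credentials secret; the name and key names are assumptions.
apiVersion: v1
kind: Secret
metadata:
  name: crowdstrike-credentials
  namespace: falcon-self-hosted-registry-assessment
type: Opaque
stringData:
  FALCON_CLIENT_ID: "<your client ID>"
  FALCON_CLIENT_SECRET: "<your client secret>"
```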
If using a Kubernetes secret, add the following reference to your `values_override.yaml` file in the `executor` section. Update `name` to match the name of your secret.
```yaml
executor:
  additionalSecretEnvFrom:
    - name: "crowdstrike-credentials"
      optional: false
```
If using a Kubernetes configmap, add the following reference to your `values_override.yaml` file in the `executor` section. Update `name` to match the name of your configmap.
```yaml
executor:
  additionalCMEnvFrom:
    - name: "crowdstrike-credentials"
      optional: false
```
The `name` field references the name of the secret or configmap that already exists (here matching the `crowdstrike-credentials` secret created earlier).
The `optional` field determines whether the pods will start if the secret is missing:
- `optional: false` (default) - Pods will fail to start if the secret doesn't exist
- `optional: true` - Pods will start even if the secret is missing
After completing the remaining steps to configure and install SHRA, return here to verify that the environment variables are properly set in the pod's environment.
To strengthen your container supply chain security and maintain security best practices, we recommend you deploy the SHRA containers from your private registry.
Copy this registry configuration to your `values_override.yaml` file and provide the required information.
* `domain_url` and `host` should both be the fully qualified domain name of your GitHub installation. The values provided in the example below are for GitHub cloud.
```yaml
- type: github
  # ... additional registry settings omitted here
```
Add and adjust the following lines in your `values_override.yaml` file to configure the job controller's database storage.
| `jobController.dbStorage.accessModes` | | Array of access modes for the job controller's database claim. | "- ReadWriteOnce" |
| `jobController.dbStorage.storageClass` | required | Storage class to use when creating a persistent volume claim for the job controller database. Examples include "ebs-sc" in EKS and "standard" in GKE. | "" |
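A sketch of how these settings look in `values_override.yaml` (the storage class shown is only an example; use one available in your cluster):

```yaml
jobController:
  dbStorage:
    storageClass: "ebs-sc"  # example only; use a storage class available in your cluster
    accessModes:
      - ReadWriteOnce
```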
#### Storage Driver Considerations When Sharing a Persistent Volume Claim for Executor Database Storage
The executor database acts primarily as a local cache so that SHRA knows what it has already scanned, which speeds up future assessments. If you run multiple executor pods, they can share a single database so that each pod is aware of the work done by the others. The `executor.dbStorage.accessModes` field is set to `ReadWriteOnce` by default because this mode is supported by all storage drivers; however, it prevents sharing a single database volume across pods. If you will be running multiple executor pods and want them to share an executor database, set this field to `ReadWriteMany` and be sure that your storage class supports this mode.
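A sketch of that override, assuming your storage class supports `ReadWriteMany`:

```yaml
executor:
  dbStorage:
    accessModes:
      - ReadWriteMany  # requires a storage driver and class that support ReadWriteMany
```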
#### Change persistent storage retention
To reduce the footprint of the `job controller` database, you can adjust data retention periods for each of the three main jobs it schedules for `executor`.
Configure your temporary storage location by adding the following lines to your `values_override.yaml` file.
| `executor.assessmentStorage.pvc.create` | | If `true`, creates the persistent volume claim for assessment storage. Default setting is to create a PVC unless you modify the config. | true |
| `executor.assessmentStorage.pvc.existingClaimName` | | An existing storage claim name you wish to use instead of the one created above. Required if `executor.assessmentStorage.pvc.create` is false. | "" |
| `executor.assessmentStorage.pvc.storageClass` | required | Storage class to use when creating a persistent volume claim for the assessment storage. Examples include "ebs-sc" in EKS and "standard" in GKE. | "" |
| `executor.assessmentStorage.pvc.accessModes` | | Array of access modes for the assessment storage volume claim. | "- ReadWriteOnce" |
#### Storage Driver Considerations When Sharing a Persistent Volume Claim for Assessment Storage
The `executor.assessmentStorage.pvc.accessModes` field is set to `ReadWriteOnce` by default because this mode is supported by all storage drivers. If you will be using multiple executor pods that share a single volume for unpacking images, you will need to use the `ReadWriteMany` setting. Be sure that your storage class supports this mode.
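A sketch of that override, again assuming the storage class configured in `executor.assessmentStorage.pvc.storageClass` supports `ReadWriteMany`:

```yaml
executor:
  assessmentStorage:
    pvc:
      accessModes:
        - ReadWriteMany  # requires a storage class that supports ReadWriteMany
```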
### Configure SHRA scaling to meet your scanning needs
The Chart's `values.yaml` file includes more comments and descriptions in-line.
| `executor.assessmentStorage.pvc.create` | | If `true`, creates the persistent volume claim for assessment storage. Default setting is to create a PVC unless you modify the config. | true |
| `executor.assessmentStorage.pvc.existingClaimName` | | An existing storage claim name you wish to use instead of the one created above. Required if `executor.assessmentStorage.pvc.create` is `false`. | "" |
| `executor.assessmentStorage.pvc.storageClass` | required | Storage class to use when creating a persistent volume claim for the assessment storage. Examples include "ebs-sc" in EKS and "standard" in GKE. | "" |
| `executor.assessmentStorage.pvc.accessModes` | | Array of access modes for the assessment storage volume claim. See [Storage Driver Considerations](#storage-driver-considerations-when-sharing-a-persistent-volume-claim-for-assessment-storage) to determine whether to set this to `ReadWriteMany`. | "- ReadWriteOnce" |
| `executor.logLevel` | | Log level for the `executor` service (1:error, 2:warning, 3:info, 4:debug) | 3 |
| `executor.labels` | | Additional labels to apply to the executor pods. | {} |
| `executor.podAnnotations` | | Additional pod annotations to apply to the executor pods. | {} |