In order to improve the self-service capabilities of standard-ci, it is
important that projects can add their own secrets (to reach external
services, e.g. Docker Hub, ...).
Travis has a very nice system which helps engineers there:
Basically, the CI system generates a public/private key pair for every
enabled Git repo. The engineer simply fetches the public key via a
well-known URL and encrypts the secrets with it. The encrypted secret can
then be made part of the source repo, and before the tests are run the CI
system decrypts it. That can play together pretty well with Jenkinsfiles.
Less manual intervention from the CI team to add secrets to jobs
Strengthens the config-in-code thinking
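The Travis-style flow above can be sketched with standard RSA encryption. This is a minimal illustration using the Python cryptography library; the OAEP parameters and the literal secret are assumptions for the example, not Travis's actual implementation details:

```python
# Sketch of the Travis-style flow, assuming RSA-OAEP encryption
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# 1. The CI system generates a key pair per repo; the private key
#    never leaves the CI system
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()  # published at a well-known URL

# 2. The engineer encrypts a secret with the public key and commits
#    the ciphertext to the source repo
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(b"dockerhub-password", oaep)

# 3. Before running the tests, the CI system decrypts with the private key
assert private_key.decrypt(ciphertext, oaep) == b"dockerhub-password"
```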
We actually have a mechanism for secrets as environment variables in STDCI. Currently, we hold the secrets in a single secret file on our Jenkins master, and engineers can request those secrets as environment variables via *.environment.yaml. The syntax is the same as for Kubernetes:
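A minimal sketch of such an *.environment.yaml entry, following the Kubernetes secretKeyRef convention (the secret and variable names here are illustrative):

```yaml
- name: MY_VAR
  valueFrom:
    secretKeyRef:
      name: MySecret1
      key: password
```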
The example above will bind the password field from MySecret1 to $MY_VAR inside the chroot created by STDCI.
The one part that is still missing, though, is a common interface that lets projects add their secrets to this file automatically.
In my opinion, having the secrets encrypted in the code is better, since engineers don't need accounts on the CI system to do that. A public RSA key can be made accessible to the whole world. In other words, the issue is exactly about the self-service and not about how to bind existing secrets. It is also related to OVIRT-1868. In combination with a Jenkinsfile, pretty much the whole yaml/script/chroot can be made obsolete, but that is a different story.
The main issue I have with the concept you suggest is that it chains the code to a specific instance of the CI system.
The point of STDCI is to be a standard - you can take a compliant project and build/test it the same way on different CI systems. The way you suggest handling secrets essentially chains the project to a specific CI system instance - the one that knows the right private key.
This concept makes perfect sense for PAAS providers like Travis that want to lock you into the single instance of their platform...
Our view of credentials is also slightly different - instead of a developer providing his own credentials for using service X, he just asks for access for service X, and it becomes the CI system's responsibility to figure out how to provide access to that service.
Having said the above, implementing what you ask for is not difficult, so we may add this soon as an additional feature for our existing credentials support. The main challenge would be to find where to store all the private keys and provide access to the public keys. Our system doesn't really have a UI that is not linked to a specific build/test run, since so far the assumption has always been that all communication with the CI system is done via commits or comments to the SCM.
Here is an implementation scheme that can meet 's UX requirements while still allowing STDCI projects to be portable between CI systems.
First, we adopt or setup an online credentials storage service that has the following features:
It has a UI where users can log in and upload or download credentials
It can generate key pairs, storing the private key while making the public key visible.
It supports an OAuth-like flow where a system can request access to certain credentials and the user can confirm or deny the request.
Second, we write a secrets provider that allows the user to refer to a set of credentials in the service above (as well as the service itself). When trying to provide the secrets, the system would request access via the credentials storage service.
Third, we write an STDCI service that encapsulates the special-case flow where we get a private key from the secrets provider and use it to decrypt files from the Git repo.
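The third step could look something like the sketch below. This is only an illustration of the decryption side; the "*.enc" file-naming convention and the OAEP parameters are assumptions for the example, not an existing STDCI interface, and obtaining the private key from the secrets provider is left out:

```python
# Hypothetical sketch: decrypt every "<name>.enc" file that engineers
# committed to the Git repo, using a private key obtained earlier from
# the secrets provider. Naming convention and padding are assumptions.
from pathlib import Path
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives import hashes

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def decrypt_repo_secrets(private_key, repo_dir):
    """Decrypt all committed '<name>.enc' files into in-memory secrets."""
    secrets = {}
    for enc_file in Path(repo_dir).glob("*.enc"):
        # The file stem becomes the secret's name, e.g. "docker_pass.enc"
        # yields the secret "docker_pass"
        secrets[enc_file.stem] = private_key.decrypt(enc_file.read_bytes(), OAEP)
    return secrets
```

The decrypted values would then be bound to environment variables via the existing *.environment.yaml mechanism rather than exposed directly.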
KubeVirt moved to Prow, and AFAIU this feature isn't needed anymore on STDCI.