SUSE Cloud Foundry (SCF) is a Cloud Foundry distribution based on the open source version, but with several key differences:
- Uses fissile to containerize the CF components for running on top of Kubernetes (and Docker)
- CF Components run on an OpenSUSE Stemcell
- CF Apps can optionally run on a preview of the OpenSUSE Stack (rootfs + buildpacks)

Fissile has been around for a few years now and its containerization technology is fairly stable; however, deploying directly to Kubernetes is relatively new, as are the OpenSUSE stack and stemcell. This means that things are liable to break as we continue development. In particular, links and hosting locations are still in flux and will most likely break.
For development testing we've mainly been targeting the following so they should be a known working quantity:
| OS             | Virtualization |
|----------------|----------------|
| OpenSUSE 42.x  | Libvirt        |
| Mac OSX Sierra | VirtualBox     |
For more production-like deploys we've been targeting bare-metal Kubernetes 1.6.1 (using only 1.5 features), though these deploys currently require the adventurer to be able to debug and troubleshoot, which takes knowledge of the components this repo currently brings together.
- SUSE Cloud Foundry
- Disclaimer
- Table of Contents
- Deploying SCF on Vagrant
- Deploying SCF on Kubernetes
- Development FAQ
  - Where do I find logs?
  - How do I clear all data and begin anew without rebuilding everything?
  - How do I run smoke and acceptance tests?
  - fissile refuses to create images that already exist. How do I recreate images?
  - My vagrant box is frozen. What can I do?
  - Can I target the cluster from the host using the cf CLI?
  - How do I connect to the Cloud Foundry database?
  - How do I add a new BOSH release to SCF?
  - What does my dev cycle look like when I work on Component X?
  - How do I expose new settings via environment variables?
  - How do I bump the submodules for the various releases?
  - Can I suspend or resume my vagrant VM?
  - How do I develop an upstream PR?
  - How do I publish SCF and BOSH images?
- We recommend running on a machine with more than 16G of RAM for now.
- You must install Vagrant (1.9.5+): https://www.vagrantup.com
- Install the following Vagrant plugins (see the example below):
  - vagrant-reload
  - vagrant-libvirt (if using libvirt)
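If you do not have them yet, both plugins can be installed with Vagrant's standard plugin command, for example:

```bash
vagrant plugin install vagrant-reload
# Only needed when using the libvirt provider:
vagrant plugin install vagrant-libvirt
```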
Deploying on Vagrant is highly scripted, so there should be very little to do to get a working system.
- Initial repo check out

  ```bash
  git clone --recurse-submodules https://github.com/SUSE/scf
  ```
- Building the system

  ```bash
  # Bring the vagrant box up
  vagrant up --provider X   # Where X is libvirt | virtualbox

  # Once the vagrant box is up, ssh into it
  vagrant ssh

  # The scf directory you cloned has been mounted into the guest OS, cd into it
  cd scf

  # This runs a combination of bosh & fissile in order to create the docker images you'll need
  # Once this step is done you can see images available via "docker images"
  make vagrant-prep

  # This uses fissile to create kubernetes service, deployment, and stateful set definitions
  make kube

  # This is the final step, where it will create the 'cf' namespace in K8s and provision
  # all the definitions you created.
  make run

  # Watch the status of the pods; when everything is fully ready it should be usable.
  pod-status --watch

  # Currently the api role takes a very long time to do its migrations (~20 mins). To see if it's
  # doing migrations, check the logs; if you see messages about migrations please be patient,
  # otherwise see the Troubleshooting guide.
  k logs -f cf:^api-[0-9]
  ```
Note: If every role does not go green in `pod-status --watch`, refer to Troubleshooting.
The vagrant box is set up with default certs, passwords, IPs, etc. to make it easier to run and develop on. To access it and try it out, all you should need is the CF client. Once you've connected with the cf CLI you should be able to do anything you can do with a vanilla Cloud Foundry.
You can get the cf client here: https://github.com/cloudfoundry/cli
The vagrant box is created on a network with a static IP on the host, which means that you cannot connect to it from some other machine.
```bash
# Attach to the endpoint (self-signed certs in dev mode require skipping validation)
# cf-dev.io simply resolves to the static IP 192.168.77.77 that vagrant provisions
# This DNS resolution may fail on certain DNS providers that block resolution to 192.168.*
cf api --skip-ssl-validation https://api.cf-dev.io
```
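Once the API is targeted you can log in; a minimal sketch, assuming the default development admin user (the password is part of the default settings shipped on the Vagrant box, and the org/space names below are just examples):

```bash
# Log in as the admin user; you will be prompted for the password,
# which comes from the default development settings on the Vagrant box.
cf login -u admin

# Example org and space names -- create whatever you like.
cf create-org demo && cf target -o demo
cf create-space dev && cf target -s dev
```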
Typically, Vagrant box deployments encounter one of a few problems:
- uaa does not come up correctly (constantly not ready in `pod-status`)

  In this case perform the following:

  ```bash
  # Delete everything in the uaa namespace
  k delete namespace uaa

  # Delete the pv related to uaa/mysql-data-mysql-0
  k get pv   # Find it
  k delete pv pvc-63aab845-4fe7-11e7-9c8d-525400652dd8

  make uaa-run
  ```
- api does not come up correctly and is not performing migrations (curl output in logs)

  uaa is not functioning; try the steps above.
After careful consideration of the difficulty of the current install, we decided not to detail the instructions to install on bare K8s, because it still requires far too much knowledge of SCF-related systems and troubleshooting.
Please be patient while we work on a set of Helm charts that will help people easily install on any Kubernetes.
| Name | Effect |
|------|--------|
| `run` | Set up SCF on the current node |
| `stop` | Stop SCF on the current node |
| `vagrant-box` | Build the Vagrant box image using packer |
| `vagrant-prep` | Shortcut for building everything needed for `make run` |
There are two places to see logs: Monit's logs, and the actual log files of each process in the container.
- Monit logs

  ```bash
  # Normal form using kubectl
  kubectl logs --namespace cf router-3450916350-xb3kf

  # Short form using k
  k logs cf:^router-[0-9]
  ```
- Container process logs

  ```bash
  # Normal form
  kubectl exec -it --namespace cf nats-0 -- env LINES=$LINES COLS=$COLS TERM=$TERM bash

  # Short form
  k ssh :nats

  # After ssh'ing, the logs are all in this directory for each process:
  cd /var/vcap/sys/log
  ```
On the Vagrant box, run the following commands:
```bash
make stop
make run
```
On the Vagrant box, when `pod-status` reports all roles are running, enable `diego_docker` support with `cf enable-feature-flag diego_docker` and execute the following commands:

```bash
make smoke
make cats
kubectl create -n cf -f kube/bosh-task/acceptance-tests-brain.yml
```
Deploy `acceptance-tests-brain` as above, but first modify the environment to include `INCLUDE=pattern` or `EXCLUDE=pattern`. For example, to run just `005_sso_test.sh` and `014_sso_authenticated_passthrough_test.sh`, you could add `INCLUDE` with a value of `sso`.
It is also possible to run custom tests by mounting them at the `/tests` mountpoint inside the container. The mounted tests will be combined with the bundled tests. However, to do so you will need to run the container manually via docker. To exclude the bundled tests, match against names starting with 3 digits followed by an underscore (as in `EXCLUDE=\b\d{3}_`), or explicitly select only the mounted tests with `INCLUDE=^/tests/`.
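A minimal sketch of such a manual docker run; the image name and local test directory below are placeholders, not names defined by this repo:

```bash
# Mount custom tests at /tests and run only those, excluding the bundled ones.
docker run --rm \
  -v "$PWD/my-tests:/tests" \
  -e INCLUDE='^/tests/' \
  my-registry/acceptance-tests-brain:latest
```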
Deploy `acceptance-tests` after modifying the environment block to include `CATS_SUITES=-suite,+suite`. Each suite is separated by a comma. The modifiers apply until the next modifier is seen, and have the following meanings (an example follows the table):
| Modifier | Meaning |
|----------|---------|
| `+` | Enable the following suites |
| `-` | Disable the following suites |
| `=` | Disable all suites, and enable the following suites |
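For illustration, a value that disables everything and then enables only two suites could look like this; the suite names are examples, not a definitive list:

```bash
# Disable all suites, then enable only "routing" and "detect"
# (set this in the environment block of the acceptance-tests definition).
CATS_SUITES==routing,+detect
```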
On the Vagrant box, run the following commands:
```bash
cd ~/scf

# Stop gracefully.
make stop

# Delete all fissile images.
docker rmi $(fissile show image)

# Re-create the images and then run them.
make images run
```
Try each of the following solutions sequentially:
- Run the `vagrant reload` command.
- Run the `vagrant halt && vagrant reload` commands.
- Manually stop the virtual machine and then run the `vagrant reload` command.
- Run the `vagrant destroy -f && vagrant up` command and then run `make vagrant-prep run` on the Vagrant box.
You can target the cluster on the hardcoded `cf-dev.io` address assigned to a host-only network adapter. You can access any URL or endpoint that references this address from your host.
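For example, you can query the API's standard info endpoint from the host (`-k` skips validation of the self-signed development certificates):

```bash
curl -k https://api.cf-dev.io/v2/info
```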
- Use the role manifest to expose the port for the mysql proxy role.
- The MySQL instance is exposed at `192.168.77.77:3306`.
- The default username is `root`.
- You can find the default password in the `MYSQL_ADMIN_PASSWORD` environment variable in the `~/scf/bin/settings/settings.env` file on the Vagrant box (see the example after this list).
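Putting those values together, any standard MySQL client on the host can connect like this (it will prompt for the password from `settings.env`):

```bash
mysql -h 192.168.77.77 -P 3306 -u root -p
```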
- Add a Git submodule to the BOSH release in `./src` (a sketch of this step follows the list).
- Mention the new release in `.envrc`.
- Modify the `role-manifest.yml`:
  - Add new roles or change existing ones.
  - Add exposed environment variables (`yaml path: /configuration/variables`).
  - Add configuration templates (`yaml path: /configuration/templates` and `yaml path: /roles/*/configuration/templates`).
- Add defaults for your configuration settings to `~/scf/bin/settings/settings.env`.
- If you need any extra default certificates, add them to `~/scf/bin/settings/certs.env`.
- Add generation code for the certs to `~/scf/bin/generate-dev-certs.sh`.
- Add any opinions (static defaults) and dark opinions (configuration that must be set by the user) to `./container-host-files/etc/scf/config/opinions.yml` and `./container-host-files/etc/scf/config/dark-opinions.yml`, respectively.
- Change the `./Makefile` so it builds the new release:
  - Add a new target `<release-name>-release`.
  - Add the new target as a dependency for `make releases`.
- Test the changes.
- Run the `make <release-name>-release compile images run` command.
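A minimal sketch of the first step; the repository URL and release name below are placeholders for whichever BOSH release you are adding:

```bash
# Placeholder URL and name -- substitute the actual BOSH release repository.
git submodule add https://github.com/example-org/example-release.git src/example-release
```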
- Make a change to component `X`, in its respective release (`X-release`).
- Run `make X-release compile images run` to build your changes and run them.
- Edit `./container-host-files/etc/scf/config/role-manifest.yml`:
  - Add the new exposed environment variables (`yaml path: /configuration/variables`).
  - Add or change configuration templates:
    - `yaml path: /configuration/templates`
    - `yaml path: /roles/*/configuration/templates`
- Add defaults for your new settings in `~/scf/bin/settings/settings.env`.
- If you need any extra default certificates, add them to `~/scf/bin/dev-certs.env`.
- Add generation code for the certificates here: `~/scf/bin/generate-dev-certs.sh`.
- Rebuild the role images that need this new setting:

  ```bash
  docker stop <role>
  docker rmi -f fissile-<role>:<tab-for-completion>
  make images run
  ```
Tip: If you do not know which roles require your new settings, you can use the following catch-all:
```bash
make stop
docker rmi -f $(fissile show image)
make images run
```
Note: Because this process involves cloning and building a release, it may take a long time.
This section describes how to bump all the submodules at the same time. This is the easiest way because we have scripts helping us here.
1. On the host machine run `bin/update-releases.sh <RELEASE>` to bump to the specified release of CF. This pulls the information about compatible releases, creates clones, and bumps them.
2. Next up, we need the BOSH releases for the cloned and bumped submodules. Run `bin/create-clone-releases.sh`. This command will place the log output for the individual releases into the subdirectory `LOG/ccr`.
3. With this done we can now compare the BOSH releases of originals and clones, telling us what properties have changed (added, removed, changed descriptions and values, ...). On the host machine run `diff-releases.sh`. This command will place the log output and differences for the individual releases into the subdirectory `LOG/dr`. (A consolidated sketch of steps 1-3 appears after this list.)
4. Act on configuration changes:

   Important: If you are not sure how to treat a configuration setting, discuss it with the SCF team.

   For any configuration changes discovered in the previous step, you can do one of the following:
   - Keep the defaults in the new specification.
   - Add an opinion (static defaults) to `./container-host-files/etc/scf/config/opinions.yml`.
   - Add a template and an exposed environment variable to `./container-host-files/etc/scf/config/role-manifest.yml`.

   Define any secrets in the dark opinions file `./container-host-files/etc/scf/config/dark-opinions.yml` and expose them as environment variables:
   - If you need any extra default certificates, add them here: `~/scf/bin/dev-certs.env`.
   - Add generation code for the certificates here: `~/scf/bin/generate-dev-certs.sh`.
5. Evaluate role changes:
   - Consult the release notes of the new version of the release.
   - If there are any role changes, discuss them with the SCF team, then follow steps 3 and 4 from this guide.
6. Bump the real submodule:
   - Bump the real submodule and begin testing.
   - Remove the clone you used for the release.
7. Test the release by running the `make <release-name>-release compile images run` command.
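Taken together, the scripted part of a bump (steps 1-3 above) can be run from the host like this; `<RELEASE>` is the CF release you are bumping to, as in step 1:

```bash
# Bump the cloned submodules, build their BOSH releases, and diff them
# against the originals. Logs end up in LOG/ccr and LOG/dr respectively.
bin/update-releases.sh <RELEASE>
bin/create-clone-releases.sh
bin/diff-releases.sh
```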
- Run the `vagrant reload` command.
- Run the `make run` command.
- If our submodules are close to the `HEAD` of upstream and no merge conflicts occur, follow the steps described here.
- If merge conflicts occur, or if the component is referenced as a submodule and it is not compatible with the parent release, work with the SCF team to resolve the issue on a case-by-case basis.
- Ensure that the Vagrant box is running.
- `ssh` into the Vagrant box.
- To tag the images into the selected registry and to push them, run the `make tag publish` command.
- This target uses the `make` variables listed below to construct the image names and tags:

  | Variable | Default | Meaning |
  |----------|---------|---------|
  | `IMAGE_REGISTRY` | empty | The name of the trusted registry to publish to |
  | `IMAGE_PREFIX` | `scf` | The prefix to use for image names (must not be empty) |
  | `IMAGE_ORG` | `splatform` | The organization in the image registry |
  | `BRANCH` | current branch | The tag to use for the images |

- To publish to the standard trusted registry run the `make tag publish` command, for example:

  ```bash
  make tag publish IMAGE_REGISTRY=docker.example.com/
  ```