Update on the state of Ceph Support in openATTIC 3.x (July 2017)

A little less than a month ago, Lenz Grimmer gave an overview of the current state of our openATTIC 3.x development.

We have made a lot of good progress since then, and I'm very proud to announce that NFS Gateway Management, RGW Bucket Management and Prometheus/Grafana monitoring made it into our newest openATTIC 3.3.0 release, along with a lot of UI usability improvements.

The relationship between DeepSea and openATTIC is getting closer, which is why we recommend deploying and managing your Ceph cluster with DeepSea in order to use the full functionality and the latest features of openATTIC.

We are currently working on an installation guide for DeepSea, but for now you can take a look at the README.md or my blog post about how to deploy DeepSea.
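In a nutshell, a DeepSea deployment walks through a sequence of orchestration stages on the Salt master. The following is a condensed sketch, assuming DeepSea and Salt are already installed; please refer to the README.md for the authoritative steps:

```shell
# Run the DeepSea orchestration stages in order on the Salt master.
salt-run state.orch ceph.stage.0   # prepare and update all minions
salt-run state.orch ceph.stage.1   # discover hardware, generate proposals

# Assign roles by editing /srv/pillar/ceph/proposals/policy.cfg, then:
salt-run state.orch ceph.stage.2   # create the configuration
salt-run state.orch ceph.stage.3   # deploy the core cluster (MONs, OSDs)
salt-run state.orch ceph.stage.4   # deploy services (MDS, RGW, NFS Ganesha, openATTIC)
```

These commands require a working Salt master with the DeepSea runners installed, so they are shown here for orientation only.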

If you want to use a Grafana instance installed on a different node, just change the default settings in /etc/sysconfig/openattic to fit your needs:

# Host of the Grafana instance which shall be used by openATTIC.
# Default port, necessary as DeepSea doesn't provide one; 80 is the default port of the DeepSea-deployed Grafana.
# The username to log into Grafana.
# The password to log into Grafana.
# The HTTP scheme to be used. Either 'http' or 'https'.
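Put together, the relevant section of /etc/sysconfig/openattic might look like the sketch below. The GRAFANA_API_* variable names and values are assumptions for illustration; verify the exact names in the file shipped with your installation:

```shell
# Illustrative /etc/sysconfig/openattic snippet; variable names are
# assumptions -- check the shipped file for the exact spelling.
GRAFANA_API_HOST="grafana.example.com"   # host of the Grafana instance
GRAFANA_API_PORT="80"                    # default port, see comments above
GRAFANA_API_USERNAME="admin"             # Grafana login user
GRAFANA_API_PASSWORD="admin"             # Grafana login password
GRAFANA_API_SCHEME="http"                # 'http' or 'https'
```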

For the next release, we will be working on making the Grafana settings configurable via the UI.

NFS Ganesha Management

The management and configuration of NFS Gateway hosts (OP-2195) was released in openATTIC 3.2.0. This feature uses the Salt REST API to communicate with DeepSea and apply the required target configuration on the remote NFS nodes. openATTIC is now capable of managing NFS shares on top of CephFS or S3 buckets on multiple nodes, supporting a wide range of NFSv3/NFSv4 features. The underlying NFS server functionality is provided by NFS Ganesha.
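To illustrate the communication path, the sketch below builds the two kinds of requests a client like openATTIC sends to the Salt REST API (rest_cherrypy): a login request that yields an auth token, and a runner call. The host, credentials and runner arguments are illustrative assumptions, not openATTIC's actual internals:

```python
# Hypothetical sketch of talking to the Salt REST API (rest_cherrypy).
# SALT_API_URL and the credentials below are assumptions for illustration.
import json
import urllib.request

SALT_API_URL = "http://salt-master.example.com:8000"  # assumed endpoint

def build_login_request(username, password, eauth="sharedsecret"):
    """Build the POST /login request that yields an auth token."""
    payload = {"username": username, "password": password, "eauth": eauth}
    return urllib.request.Request(
        SALT_API_URL + "/login",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Accept": "application/json"},
    )

def build_runner_request(token, fun, **kwargs):
    """Build a request that executes a Salt runner (client='runner')."""
    lowstate = [{"client": "runner", "fun": fun, **kwargs}]
    return urllib.request.Request(
        SALT_API_URL + "/",
        data=json.dumps(lowstate).encode(),
        headers={"Content-Type": "application/json",
                 "Accept": "application/json",
                 "X-Auth-Token": token},
    )

# e.g. build_runner_request(token, "state.orch", mods="ceph.stage.4")
```

Sending these requests with urllib.request.urlopen() against a real salt-api instance would then drive DeepSea orchestrations remotely.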

Rados Gateway Bucket Management

The management and configuration of RGW users and access keys (OP-2368) was released in openATTIC 3.2.0. In 3.3.0, we added the functionality to manage RGW buckets on multiple nodes.
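Because RGW exposes an S3-compatible API, the access keys managed here can be used by any S3 client. As a minimal sketch, the function below computes the AWS signature-v2 string that such a client would place in the Authorization header of a bucket request; the key and bucket name are illustrative assumptions:

```python
# Minimal sketch of AWS signature v2, which the RADOS Gateway's
# S3-compatible API accepts. Key and bucket name are illustrative.
import base64
import hashlib
import hmac

def sign_s3_v2(secret_key, method, resource, date, content_type=""):
    """Compute the v2 signature for an S3-style request.

    StringToSign = Method \n Content-MD5 \n Content-Type \n Date \n Resource
    (no x-amz-* headers in this simplified sketch).
    """
    string_to_sign = "\n".join([method, "", content_type, date, resource])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1)
    return base64.b64encode(digest.digest()).decode()

# A bucket-creation request would then carry:
#   Authorization: AWS <access_key>:<signature>
sig = sign_s3_v2("secret", "PUT", "/mybucket/",
                 "Thu, 20 Jul 2017 10:00:00 GMT")
```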

Ceph Monitoring with Grafana/Prometheus

The decision was made (OP-829)! We replaced Nagios and pnp4nagios with Grafana and Prometheus. You can switch between the different dashboards or directly access OSD, pool or host stats within the dedicated panel by selecting a specific item in the list. We're almost done with the UI part; only the RBD graphs are currently still based on the old Nagios implementation. These will be replaced as soon as we've written a Prometheus exporter for RBDs that collects useful data. We deliver four dashboards by default:

  • Ceph - Cluster (default) - Gives you an overview about the current state of your cluster (Status Cluster, Status Monitors, Cluster Capacity...)
  • Ceph - Pools - Details of a selected Pool (Objects, IOPS, Throughput)
  • Ceph - OSD - Details of a selected OSD (Utilization, PGs, Utilization Variance, Latency, OSD Storage)
  • Node Statistics - Details of a selected Host (CPU, Memory, Disk I/O, Filesystem Fullness...)

The next step is to remove Nagios from the backend as well.
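Under the hood, Grafana panels like the ones listed above are built from PromQL queries over the metrics scraped by Prometheus. The metric names below are assumptions that depend on the Ceph exporter version in use, so treat them as examples of the query style rather than exact names:

```promql
# Cluster capacity used, as a ratio (assumed metric names)
ceph_cluster_total_used_bytes / ceph_cluster_total_bytes

# Per-OSD apply latency, averaged over the last 5 minutes
avg_over_time(ceph_osd_apply_latency_ms[5m])
```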

UI Improvements

We improved the UI in general and aim to improve usability with every release. One highlight in this release is the "System -> Settings" page, where you can specify your Salt API host as well as the credentials for the RGW hosts. We also added a live status/validation check of the configuration, so you get direct feedback on whether your settings are correct.


Currently, openATTIC 3.x is available in packaged form for openSUSE Leap via the filesystems:openATTIC:3.x/openattic package repository on the openSUSE Build Service. As already mentioned in our last update, we plan to add packages for other distributions as well, but with our closer integration into DeepSea this depends on the availability of DeepSea for these platforms. If anybody is willing to give us a hand with this, that would be fabulous and would help speed up the process.

As usual, we appreciate any feedback or comments you might have - please get in touch with us!

