openATTIC 3.4.1 has been released

We are very happy to announce the release of openATTIC version 3.4.1. In this version, we completely removed Nagios/PNP4Nagios graphs from the UI and installation in favor of Prometheus/Grafana.

We've continued with the integration of Ceph Luminous features. The 'allow_ec_overwrites' flag can now be set when creating erasure-coded pools via the REST API; the corresponding UI part is currently under construction. Enabling the 'layering' and 'striping' features at the same time when creating an RBD is now supported as well. Furthermore, support for the new 'ceph health' output format has been integrated.
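As a rough illustration, a REST API request to create an erasure-coded pool with overwrites enabled might look like the sketch below. The endpoint path and field names here are assumptions for illustration only, not the documented openATTIC API:

```
POST /api/ceph/<fsid>/pools
{
  "name": "ecpool",
  "pg_num": 32,
  "type": "erasure",
  "erasure_code_profile": "default",
  "allow_ec_overwrites": true
}
```

On the Ceph side, this flag corresponds to `ceph osd pool set <pool> allow_ec_overwrites true`, which must be enabled before an RBD image can be placed on an erasure-coded data pool.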

The UI settings page has been extended to support the configuration of Grafana and to gracefully handle an incorrectly entered configuration - which means it is no longer necessary to set this configuration in /etc/sysconfig/openattic or /etc/default/openattic. The Salt-API can now be configured to use sharedsecret-key authentication, in addition to 'auto'. As usual, we also improved some existing UI features; this release, for example, contains help-text changes that provide users with more troubleshooting hints and possible solutions.
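For reference, a file-based configuration of the Salt-API settings might look like the following sketch. The key names and values are illustrative assumptions; as mentioned above, these settings can now also be managed directly via the UI settings page instead:

```ini
# /etc/sysconfig/openattic (Debian: /etc/default/openattic)
# Hypothetical sketch - key names are assumptions for illustration.
SALT_API_HOST="salt-master.example.com"
SALT_API_PORT="8000"
# 'sharedsecret' authentication as an alternative to 'auto':
SALT_API_EAUTH="sharedsecret"
SALT_API_USERNAME="admin"
SALT_API_SHARED_SECRET="replace-with-your-shared-secret"
```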

Read more…

SUSE Enterprise Storage 5 Beta Program

openATTIC 3.x will be part of the upcoming SUSE Enterprise Storage 5 release, which is currently in beta testing. The release will be based on the upstream Ceph "Luminous" release and will ship with openATTIC 3.x and Salt/DeepSea for orchestration, deployment, and management.

If you would like to take a look at this release and help us with testing the new functionality provided by openATTIC and DeepSea without having to assemble the various pieces manually, please join our beta test program by following the instructions outlined on the SUSE Enterprise Storage 5 Beta Program web page.

We look forward to your feedback!

Update on the State of Ceph Support in openATTIC 3.x (June 2017)

A bit over a month ago, I posted about a few new Ceph management features that we have been working on in openATTIC 3.x after we finished refactoring the code base.

These have been merged into the trunk in the meantime, and the team has started working on additional features. In this post, I'd like to give you an update on the latest developments and share some screenshots with you.

Read more…

Sneak preview: Upcoming Ceph Management Features

Despite the number of disruptive changes that we went through in the past few weeks (e.g. moving our code base from Mercurial to git, relocating our infrastructure to a new data center, and refactoring our code base for version 3.0), our developers have been busy working on expanding the Ceph management capabilities in openATTIC.

I'd like to highlight two of them that are nearing completion and should land in the master branch shortly.

Read more…

openATTIC 2.0.20 has been released

It is our great pleasure to announce the release of openATTIC version 2.0.20. This is a minor bugfix release, which also provides a number of small selected improvements, e.g. in the WebUI (styling, usability), installation and logging (now adds PID and process name to logs). Furthermore, we updated our documentation - especially the installation instructions as well as the developer documentation.

Read more…

Clean up and split your branch with git

There are several reasons why you may need to completely rework your working branch. The most common one is that you fixed several unrelated things along the way while resolving an issue. Your branch grows over time until you are finished, and when you want to submit your code, you have to split it up into digestible pieces and maybe rewrite some WIP commits you made. Luckily, we are using git! With just a few commands, git lets you do all of this with a safety belt on!
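To illustrate one such safety-belt workflow, here is a small, self-contained sketch (the file names and commit messages are made up for the example) that splits a mixed WIP commit into two focused commits using `git reset`. For commits deeper in the history, `git rebase -i` with the `edit` action generalizes the same idea:

```shell
#!/bin/sh
# Sketch: split one oversized WIP commit into two focused commits.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q .
git config user.email "dev@example.com"
git config user.name "Dev"

echo base > README
git add README && git commit -qm "Initial commit"

# A WIP commit that mixes a bugfix with an unrelated cleanup:
echo fix > bugfix.txt
echo tidy > cleanup.txt
git add . && git commit -qm "WIP: fix bug and clean up"

# Undo the commit but keep the changes, then re-commit in pieces:
git reset -q --soft HEAD~1   # move HEAD back, keep changes staged
git reset -q                 # unstage everything
git add bugfix.txt  && git commit -qm "Fix overflow in parser"
git add cleanup.txt && git commit -qm "Remove dead code"

git log --oneline            # now shows two focused commits on top
```

The safety belt here is that `git reset --soft` never touches your working tree: if anything goes wrong, your changes are still on disk, and `git reflog` can always get you back to the original commit.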

Read more…

Demo currently unavailable

You might have noticed already that our live demo on demo.openattic.org is down and not reachable at the moment.

This issue is caused by our hardware move to a new and more secure data center. Right now, we're trying to figure out the best approach to make the demo accessible again. This isn't as easy as before, because we now have a dedicated firewall and a DMZ for external services.

Therefore, the demo.openattic.org URL will redirect to this announcement until we have a running demo again.

Update: It's not as easy as expected. We still need to investigate the best approach. Coming soon...

Implementing a more scalable storage management framework in openATTIC 3.0

Over the course of the last few years, we've been working on expanding and enhancing both our "traditional" local storage management functionality (NFS/CIFS/iSCSI on top of locally attached disks) as well as the Ceph management features in openATTIC.

Along the way, it became more and more clear to us that our current approach for managing storage locally does not scale well, as it requires openATTIC to be installed on every node.

When openATTIC was originally started, this wasn't so much of a concern, but today's IT infrastructures are evolving and demand more flexibility and scalability. Also, our goal of making it possible for an administrator to make changes on the command line outside of openATTIC is difficult to achieve in the current local storage implementation, in which the Django models are considered to be the "single source of truth" of a server's storage configuration.

The ongoing development of additional Ceph management functionality based on DeepSea and Salt allowed us to gather a lot of experience in implementing a more scalable approach using these frameworks and make it possible to decouple openATTIC from the node delivering the actual service. Communicating with a Salt master via the Salt REST API also enables us to separate the management UI (openATTIC) from the admin node (the Salt master).

Based on these findings, we wanted to create a playground for our developers to apply the lessons learned to the openATTIC code base. We therefore moved the current openATTIC 2.0 implementation into a separate 2.x git branch and have started working on version 3.x in the current master branch. Note that this will not be a complete rewrite of openATTIC, but rather an adaptation/refinement of the existing code base.

In addition to the already existing Ceph management functionality based on librados (e.g. Ceph pool management, RBD management), we're currently working on adding more Ceph-based storage management functionality, e.g. managing iSCSI targets as well as NFS volume management via NFS Ganesha.

The focus in this 3.0 branch will be on completing the Ceph-related management functionality first, while aiming at being able to implement the "traditional" storage management functionality using this framework (e.g. providing storage services based on node-local disks) at a later step. Salt already includes a large number of modules for these purposes.

As usual, we welcome your feedback and comments! If you have any ideas or if you can help with implementing some of these features, please get involved!