openATTIC 3.7.0 has been released

We're happy to announce version 3.7.0 of openATTIC!

Version 3.7.0 is the first bugfix release of the 3.7 stable branch, containing fixes for multiple issues that were mainly reported by users.

An issue with self-signed certificates in combination with the RGW proxy has been addressed by making this behavior configurable. We also improved the openATTIC user experience and adapted some of our frontend tests to make them more stable.

As mentioned in our last blog post, our team has been working on a Spanish translation. We are very proud to have it included in this release. Thank you, Gustavo, for your contribution.

Another highlight of the release is the newly added RBD snapshot management: openATTIC can now create, clone, roll back, protect/unprotect and delete RBD snapshots. In addition, it is now possible to copy RBD images. Furthermore, the "pool edit" feature received a slight update: we implemented the option to set the "EC overwrite" flag when editing erasure coded pools.
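
For readers who are familiar with the command line, the operations now exposed in the UI correspond roughly to the following rbd and ceph calls (pool, image and snapshot names below are just placeholders):

    ## snapshot management: create, protect, clone, roll back
    # rbd snap create rbd/myimage@mysnap
    # rbd snap protect rbd/myimage@mysnap
    # rbd clone rbd/myimage@mysnap rbd/myclone
    # rbd snap rollback rbd/myimage@mysnap
    ## a snapshot can only be unprotected and removed once no clones depend on it
    # rbd snap unprotect rbd/myimage@mysnap
    # rbd snap rm rbd/myimage@mysnap
    ## copy an RBD image
    # rbd cp rbd/myimage rbd/myimage-copy
    ## allow overwrites on an erasure coded pool (the "EC overwrite" flag)
    # ceph osd pool set myecpool allow_ec_overwrites true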

Read more…

Ceph and Ceph Manager Dashboard presentations at openSUSE Conference 2018

Last weekend, the openSUSE Conference 2018 took place in Prague (Czech Republic). Our team was present to talk about Ceph and our involvement in developing the Ceph manager dashboard, which will be available as part of the upcoming Ceph "Mimic" release.

The presentations were given by Laura Paduano and Kai Wagner from our team - thank you for your engagement! The openSUSE conference team did an excellent job in streaming and recording each session, and the resulting videos can already be viewed on their YouTube channel.

Ceph - The Distributed Storage Solution

Ceph Manager Dashboard

Ceph Dashboard v2 update

It's been a little over a month now since we reached Milestone 1 (feature parity with Dashboard v1), which was merged into the Ceph master branch on 2018-03-06.

After the initial merge, we had to resolve a few build and packaging related issues, to streamline the ongoing development, testing and packaging of the new dashboard as part of the main Ceph project.

With these teething problems out of the way, the team has started working on several topics in parallel. A lot of these are "groundwork/foundation" tasks, e.g. adding UI components and backend functionality that pave the way for additional user-visible management features.

In the meantime, we have submitted more than 80 additional pull requests, of which more than 60 have already been merged.

In this post, I'd like to summarize some of the highlights and notable improvements we're currently working on or that have been added to the code base already. This is by no means a complete list - it's more a subjective selection of changes that caught my attention.

It's also noteworthy that we've already received a number of pull requests from Ceph community members outside of the original openATTIC team that started this project - we're very grateful for the support and look forward to future contributions!

Read more…

openATTIC 3.6.2 has been released

We're happy to announce version 3.6.2 of openATTIC!

Version 3.6.2 is the second bugfix release of the 3.6 stable branch, containing fixes for multiple issues that were reported by users.

One new feature we want to point out is internationalization: openATTIC has been translated into Chinese and German to make it more accessible in other markets. We are working on further translations, for example Spanish. If you would like to see your native language supported in openATTIC, get in touch with us and we will guide you through contributing a translation. We also made some packaging changes: due to new requirements, we now use the _fillupdir RPM macro in our SUSE spec file.

As usual, the release comes with several usability enhancements and security improvements. For example, we improved the modal deletion dialog: instead of just entering "yes" when deleting an item, users are now required to enter the item's name, so they do not accidentally remove the wrong item. Furthermore, we fixed incorrect API endpoint URLs for RGW buckets. We also adapted some test cases, e.g. e2e tests were converted into Angular unit tests.

Read more…

The Ceph Dashboard v2 pull request is ready for review!

About a month ago, we shared the news that we started working on a replacement for the Ceph dashboard, to set the stage for creating a full-fledged, built-in web-based management tool for Ceph.

We're happy to announce that we have now finalized the preparations for the initial pull request, which marks our first milestone in this venture: reaching feature parity with the existing dashboard.

Screen shot of the Ceph health dashboard: /galleries/ceph-dashboard-v2-screenshots-2018-02-23/dashboard-v2-health.png

In fact, compared to the dashboard shipped with Ceph Luminous, we have already included a number of features that were added after the Luminous release, as well as a simple authentication mechanism.

Read more…

openATTIC 2.0.21 has been released

We are very happy to announce the release of openATTIC version 2.0.21. This is mainly a bugfix release.

We would like to thank everyone who contributed to this release.

Your feedback, ideas and bug reports are very welcome. If you would like to get in touch with us, consider joining our openATTIC Users Google Group, visit our #openattic channel on irc.freenode.net or leave comments below this blog post.

See the list below for a more detailed change log and further references. The OP codes in brackets refer to individual Jira issues that provide additional details on each item. You can review these on our public Jira instance.

Read more…

How to do a Ceph cluster maintenance/shutdown

Last week someone asked on the ceph-users mailing list how to shut down a Ceph cluster, and I would like to summarize the steps that are necessary to do that.

  1. Stop the clients from using your cluster (this step is only necessary if you want to shut down your whole cluster)

  2. Important - Make sure that your cluster is in a healthy state before proceeding (a quick check is shown after this list)

  3. Now you have to set some OSD flags:

    # ceph osd set noout
    # ceph osd set nobackfill
    # ceph osd set norecover
    
    Those flags should be totally sufficient to safely power down your cluster, but you
    could also set the following flags on top if you would like to pause your cluster completely:
    
    # ceph osd set norebalance
    # ceph osd set nodown
    # ceph osd pause
    
    ## Pausing the cluster means that you can't see when OSDs come
    ## back up again and no map update will happen
    
  4. Shut down your service nodes one by one

  5. Shut down your OSD nodes one by one

  6. Shut down your monitor nodes one by one

  7. Shut down your admin node
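
As mentioned in step 2, verify that the cluster is healthy before you start setting flags or shutting anything down. A minimal check (assuming a working admin keyring on the node you run it from) looks like this:

    # ceph -s
    ## or, for more detail:
    # ceph health detail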

After the maintenance, just do everything mentioned above in reverse order.
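
In particular, once all nodes are back up, unset the flags again, for example (only unset the flags you actually set):

    # ceph osd unpause
    # ceph osd unset nodown
    # ceph osd unset norebalance
    # ceph osd unset norecover
    # ceph osd unset nobackfill
    # ceph osd unset noout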

Ceph Manager Dashboard v2

The original Ceph Manager Dashboard that was introduced in Ceph "Luminous" started out as a simple, read-only view into various run-time information and performance data of a Ceph cluster, without authentication or any administrative functionality.

However, as it turns out, there is a growing demand for adding more web-based management capabilities, to make it easier for administrators who prefer a WebUI over the command line to manage Ceph. Sage Weil also touched upon this topic in the Ceph Developer Monthly call in December and created an etherpad with some ideas for improvement.

A preliminary screen shot of the Ceph health dashboard: /galleries/ceph-dashboard-v2-screenshots-2018-02-02/dashboard-v2-health.png

After learning about this, we approached Sage and John Spray from the Ceph project and offered our help to implement the missing functionality. Based on our experiences in developing the Ceph support in openATTIC, we think we have a lot to offer in the form of code and experience in creating a Ceph administration and monitoring UI.

Read more…

How to create a vagrant VM from a libvirt vm/image

It cost me some nerves and time to figure out how to create a Vagrant image from a libvirt KVM VM and how to modify an existing one. Thanks to pl_rock from Stack Exchange for the awesome start.

  • First of all, you have to install a new VM as usual. I've installed a new VM with Ubuntu 16.04 LTS. I'm not sure if it's really necessary, but set the root password to "vagrant", just to be sure.
  • Connect to your VM via SSH or a terminal and do the following steps (see the sketch below).
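
As a rough sketch of where this is heading (assuming the vagrant-libvirt provider and a qcow2 disk image; all paths, names and sizes below are placeholders), a libvirt Vagrant box is essentially a tarball containing the disk image as box.img plus a small metadata.json:

    ## copy the VM's disk image and describe it for the vagrant-libvirt provider
    # cp /var/lib/libvirt/images/ubuntu1604.qcow2 box.img
    # echo '{"provider": "libvirt", "format": "qcow2", "virtual_size": 40}' > metadata.json
    ## package the box and register it with Vagrant
    # tar czf ubuntu1604.box metadata.json box.img
    # vagrant box add --name ubuntu1604 ubuntu1604.box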

Read more…