OpenStack is undoubtedly a huge ecosystem of cooperating services. Rally is a benchmarking tool that answers the question: "How does OpenStack work at scale?". To make this possible, Rally automates and unifies multi-node OpenStack deployment, cloud verification, benchmarking and profiling. Rally does this in a generic way, making it possible to check whether OpenStack will work well on, say, a 1k-server installation under high load. It can therefore serve as a basic tool for an OpenStack CI/CD system that continuously improves SLA, performance and stability.
Rally was originally designed for machine testing. This means that communication with Rally happens only via the API or CLI, which talk directly to the database. But what if you want to run some basic test? You must have your scenarios, contexts and other pieces predefined; then you need a Rally environment, and no matter where it lives, you must have access and a policy for connecting to that VM. And that is still not all: you must source your environment and start the task knowing exactly where the right task file lives.
That is a little hard for some types of end users, and what if you want to test scenarios at scale, or something else? Without CI machinery, every simple test costs you a lot of time. I spent some time looking for an existing solution and finally created a simple Benchmark Dashboard for Horizon, which communicates directly with the database. This is a bit of an anti-pattern, but if you can live with that, you can enjoy the benefits of the dashboard.

It is not fully integrated yet, but it is sufficient for basic work.
After the warm-up part of this post, we can look at some of Rally's internals. As mentioned, it was designed for machines, not for humans. There is no easy way to start a task directly from Python: for example, if you want to use some scenario, you must import every piece that the task requires. That means about five imports for a simple scenario!
One important feature is server-side serving of Rally assets. This is the same pattern we used in our Heat extension, where we load all available stack templates and their environments. This simple behaviour benefits end users, who can easily start a complex infrastructure without any prerequisites. In this case, we automatically load all services and their scenarios from a custom path.
```python
RALLY_ROOT = '/srv/rally/scenarios'
```

Scenarios are then discovered from paths such as:

```
/srv/rally/scenarios/tasks/scenarios/nova/boot-and-delete.yml
/srv/rally/scenarios/tasks/scenarios/keystone/create.yml
/srv/rally/scenarios/tasks/scenarios/whatever/awesome.yml
```
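The discovery behaviour described above can be sketched roughly like this. This is a minimal illustration using only the standard library; `RALLY_ROOT` comes from the setting above, while the helper name and return shape are my own assumptions, not the dashboard's actual code:

```python
import os


def discover_scenarios(rally_root):
    """Walk RALLY_ROOT and group scenario files by service directory.

    Hypothetical sketch: returns a mapping like
    {'nova': ['boot-and-delete.yml'], 'keystone': ['create.yml']}.
    """
    scenarios = {}
    base = os.path.join(rally_root, 'tasks', 'scenarios')
    for dirpath, _dirnames, filenames in os.walk(base):
        service = os.path.relpath(dirpath, base)
        if service == '.':
            continue  # skip the base directory itself
        yaml_files = [f for f in filenames if f.endswith(('.yml', '.yaml'))]
        if yaml_files:
            scenarios[service] = sorted(yaml_files)
    return scenarios
```

With a tree like the one above, this would yield one entry per service directory, so the dashboard can render services and their scenarios without the user pre-loading anything.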
Rally is a pluggable platform where everything is registered, and if you try to start a task directly from Python, you must import every single piece. But you don't have to, because there is a simple way to specify what you need. See the two examples below: both load the same plugins, because all members are loaded recursively.
```python
RALLY_PLUGINS = [
    'rally.plugins.openstack',
    'rally.plugins.common',
]
```

```python
RALLY_PLUGINS = [
    'rally.plugins',
]
```
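Recursively importing everything below a package prefix is what makes those two settings equivalent. A sketch of what such a loader might look like, using only the standard library (the `load_plugin_modules` helper is illustrative, not the dashboard's actual implementation; importing a Rally plugin module is enough to register its scenarios):

```python
import importlib
import pkgutil


def load_plugin_modules(package_names):
    """Import every module below the given packages, recursively.

    Hypothetical helper: walking the package tree replaces the manual
    per-scenario imports that Rally would otherwise require.
    """
    loaded = []
    for name in package_names:
        package = importlib.import_module(name)
        loaded.append(name)
        # walk_packages recurses into subpackages via __path__.
        for info in pkgutil.walk_packages(package.__path__, prefix=name + '.'):
            importlib.import_module(info.name)
            loaded.append(info.name)
    return loaded
```

Passing `['rally.plugins']` therefore pulls in `rally.plugins.openstack` and `rally.plugins.common` (and everything below them) automatically.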
In Django, the one correct way to manage long-running tasks is to use Celery, which is an awesome and powerful framework. But to allow installation without extra requirements, there is a simple fallback implementation that creates a Thread for every task, which is basically wrong and may overload your Horizon. To override the async task behaviour, assign your own implementation to benchmark_dashboard.utils.async.run_async:
```python
from threading import Thread


def run_async(method):
    # call Celery or whatever
    Thread(target=method, args=()).start()
```
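As an example, a slightly safer override that bounds the number of worker threads with a shared pool. This is an illustrative sketch, assuming `run_async` is the hook named above; the pool size is arbitrary, and a Celery-based override would work the same way:

```python
from concurrent.futures import ThreadPoolExecutor

# One shared, bounded pool instead of an unbounded Thread per task.
_executor = ThreadPoolExecutor(max_workers=4)


def run_async(method):
    """Drop-in replacement for benchmark_dashboard.utils.async.run_async."""
    return _executor.submit(method)
```

Because `submit` returns a `Future`, callers can also check whether a task finished or raised, which the fire-and-forget Thread version cannot do.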
Finally, you can see some pictures from my development. Nothing new here yet; I only render the output from Rally, which is again a little ugly, but it is effective, and new customizations are welcome. Feel free to contribute.