Commit Graph

10 Commits

Author SHA1 Message Date
Chris Dent 3b040f58f7 Move non-nested perfload shell commands to script
The script was embedded in the playbook, which leads to some
pain with regard to editing and reviewing as well as manual
testing.

The disadvantage of doing this is that it can make jobs
somewhat less portable between projects, but in this case
that's not really an issue.

There are further improvements that can be made to remove duplication
between the nested and non-nested versions of these jobs. This
change will make it easier for those changes to be made as
people have time.

Change-Id: Ia6795ef15a03429c19e66ed6d297f62da72cc052
2019-06-20 12:38:08 +01:00
Chris Dent 8723bd7772 Nested provider performance testing
This change duplicates the ideas started with the placement-perfload
job and builds on it to create a set of nested trees that can be
exercised.

In placement-perfload, placeload is used to create the providers. This
proves to be cumbersome for nested topologies so this change starts
a new model: Using parallel [1] plus instrumented gabbi to create
nested topologies in a declarative fashion.

gate/perfload-server.sh sets up placement db and starts a uwsgi server.

gate/perfload-nested-loader.sh is called in the playbook to cause gabbi
to create the nested topology described in
gate/gabbits/nested-perfload.yaml. That topology is intentionally very
naive right now but should be made more realistic as we continue to
develop nested features.
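
As a rough sketch of that model (the URL, the count, and the loader's
arguments here are assumptions, not the job's real values), parallel
fans the gabbi runs out across several concurrent workers:

    # Hypothetical fan-out: build 100 nested trees, 8 loaders at a time.
    seq 1 100 | parallel --jobs 8 \
        gate/perfload-nested-loader.sh http://localhost:8000 {}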

There's some duplication between perfload.yaml and
nested-perfload.yaml that will be cleared up in a followup.

[1] https://www.gnu.org/software/parallel/ (although the version
packaged on Ubuntu is a non-GPL clone)

Story: 2005443
Task: 30487
Change-Id: I617161fde5b844d7f52dc766f85c1b9f1b139e4a
2019-06-20 12:37:28 +01:00
Chris Dent 910b466c50 perfload with written allocations
One of the needs we've discussed for perfload is making sure
it is measuring when some inventory has been used.

Here, we change the perfload job so that it creates the 1000 providers,
measures getting allocation_candidates, and then, in a loop of 99
iterations, gets a limited set of candidates and writes the first one
back as an allocation for a random consumer, project and user. At each
iteration it measures again.
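
A minimal sketch of that loop, assuming a noauth placement endpoint in
$PLACEMENT and illustrative resource amounts (the job's real commands
differ):

    # Hypothetical measure-and-write loop; headers, amounts and file
    # names are illustrative.
    for i in $(seq 1 99); do
        # Time fetching a limited set of allocation candidates.
        time curl -s -H 'x-auth-token: admin' \
            "$PLACEMENT/allocation_candidates?resources=VCPU:1&limit=5" \
            > candidates.json
        # Write the first candidate back as an allocation for a randomly
        # generated consumer (allocation.json is assumed to be built
        # from candidates.json by a separate step).
        curl -s -X PUT -H 'x-auth-token: admin' \
            -H 'content-type: application/json' \
            -d @allocation.json "$PLACEMENT/allocations/$(uuidgen)"
    done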

This will make the log file a lot longer, but that's not a significant
issue: the numbers that matter will either be near the top or near the
end. If they are weird, looking in the middle will be informative. We
can tweak it.

This, as usual, is just one of many ways to gather this kind of data.
Other options might include parallelizing the writes, but in this case
we are trying to see the impact of code on a single request, not on
concurrency.

At some point we will want to add nested and sharing into this mix.

Change-Id: I74b64a25f2be8fbbd01b3a3b438bba68de04b269
2019-06-07 14:27:04 +00:00
OpenDev Sysadmins 931a9e1242 OpenDev Migration Patch
This commit was bulk generated and pushed by the OpenDev sysadmins
as a part of the Git hosting and code review systems migration
detailed in these mailing list posts:

http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003603.html
http://lists.openstack.org/pipermail/openstack-discuss/2019-April/004920.html

Attempts have been made to correct repository namespaces and
hostnames based on simple pattern matching, but it's possible some
were updated incorrectly or missed entirely. Please reach out to us
via the contact information listed at https://opendev.org/ with any
questions you may have.
2019-04-19 19:41:22 +00:00
Chris Dent 65da83aece Use sync_on_startup in placement-perfload job
Use the [placement_database]/sync_on_startup config setting to
have the database schema synchronized during web-service startup
rather than through a separate call to placement-manage.
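
For reference, a sketch of what enabling this looks like, in either a
config file or the environment (the env var form follows oslo.config's
OS_<GROUP>__<OPTION> naming):

    # placement.conf equivalent:
    #   [placement_database]
    #   sync_on_startup = True
    # Environment form:
    export OS_PLACEMENT_DATABASE__SYNC_ON_STARTUP=True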

This is done for two reasons:

* It provides a reasonable test that it works, which is not present
  in other integration tests.
* Until Id9bc515cee71d629b605da015de39d1c9b0f8fc4 merges it will
  demonstrate the bug described in the story linked below.

A couple of things to note:

* The tempest job will continue to exercise placement-manage, as it
  has always done.
* The bug (in the story) doesn't impact the behavior of the API, it
  merely impacts what is or is not logged. In the
  logs/placement-api.log generated in the perfload job for this change
  there will be an initial burst of DEBUG and INFO logging, but then
  only request logging. This should be corrected by
  Id9bc515cee71d629b605da015de39d1c9b0f8fc4.

Change-Id: Ib7f5cdfa3b314af7681d594dccb553bddb764224
Story: 2005187
2019-03-09 12:24:22 +00:00
Chris Dent 7d0a37dfb1 Also time placeload when doing perfload
The previous iteration was only timing how long it took to GET some
resource providers after we create 1000 of them.

It's also useful to know how long it takes to create them.

Neither of these timings is robust, because we do not have reliable
sameness from virtual machine to virtual machine (especially between
cloud providers), but they make it possible to become aware of
unusual circumstances.

To avoid extraneous noise in the placement-perf.txt file, set +x
and set -x surround the commands that create that output.
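
Roughly, the shape of the timed section is as follows (placeload's
invocation here is an assumption, not the job's exact command):

    set +x
    # time writes its report to stderr, so append that to the file.
    (time placeload "$PLACEMENT" 1000) 2>> placement-perf.txt
    (time curl -s "$PLACEMENT/resource_providers" > /dev/null) \
        2>> placement-perf.txt
    set -x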

Change-Id: I4da2703dc4e8b306d004ac092d436d85669caf0f
2019-02-05 11:20:54 +00:00
Chris Dent f0caa12ff0 Adjust database connection pool config in perfload tests
The perfload tests can run out of connections in the sqlalchemy
connection pool when using the default configuration. This can
lead to distracting noise in the results [1] and potentially
failures. Since it is easy to adjust the settings for the job,
let's do that.

The perfload web service is set up to enable quite wide
concurrency, so the database connections need to be as well.
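
As an illustration (values made up, not the job's real settings), the
adjustment amounts to something like:

    # Hypothetical: widen the SQLAlchemy pool to match the wide uwsgi
    # concurrency; option names follow the [placement_database] group.
    export OS_PLACEMENT_DATABASE__MAX_POOL_SIZE=25
    export OS_PLACEMENT_DATABASE__MAX_OVERFLOW=100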

[1] http://logs.openstack.org/99/632599/1/check/placement-perfload/8c2a0ad/logs/

Change-Id: Id88fb2eaefaeb95208de524a827a469be749b3db
2019-01-23 13:56:55 +00:00
Chris Dent 8bace2bdf3 Don't create placement.conf in perfload.yaml
With the merge of Iefa8ad22dcb6a128293ea71ab77c377db56e8d70 placement
can run without a config file, so in this change we remove the
creation of an empty one. All the relevant config is managed by
environment variables, as provided by oslo.config 6.7.0.
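
For instance (credentials and host are placeholders), the database
connection can be supplied entirely through the environment:

    # Only the connection string is needed to boot the service when
    # no placement.conf exists.
    export OS_PLACEMENT_DATABASE__CONNECTION="mysql+pymysql://root:secret@127.0.0.1/placement?charset=utf8"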

Change-Id: Ibf285e1da57be57f8f66f3c20d5631d07098ec1c
2018-12-04 15:56:03 +00:00
Chris Dent 3ae8653338 Use a smaller base job for the perfload run
In this job we install placement by hand, based on the
instructions in
https://docs.openstack.org/placement/latest/contributor/quick-dev.html
and run the placeload command against it. This avoids a lot of node
setup time.

* mysql is installed, placement is installed, uwsgi is installed
* the database is synced
* the service is started via uwsgi, which runs with 5 processes, each
  with 25 threads (a sketch of this invocation follows the list);
  otherwise writing the resource providers is very slow and causes
  errors in placeload. It's an 8-core VM.
* placeload is called
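
A sketch of that service start (port, paths and the wsgi script name
are assumptions based on the quick-dev instructions, not the job's
exact command line):

    uwsgi --http :8000 --processes 5 --threads 25 \
        --wsgi-file ~/placement/.tox/py36/bin/placement-api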

A post.yaml is added to get the generated logs back to zuul.

Change-Id: I93875e3ce1f77fdb237e339b7b3e38abe3dad8f7
2018-11-30 15:00:57 +00:00
Chris Dent e6545dc2b2 Add a perfload job.
This adds the placeload perf output as its own job, using a
very basic devstack setup. It is non-voting. If it reports as
failing it means it was unable to generate the correct number
of resource providers against which to test.

It ought to be possible to do this without devstack, and thus speed
things up, but some more digging in existing zuul playbooks is
needed first, and having some up to date performance info is useful
now.

Change-Id: Ic1a3dc510caf2655eebffa61e03f137cc09cf098
2018-11-30 14:59:47 +00:00