Commit Graph

19 Commits

Author SHA1 Message Date
melanie witt 9171aae39f Make perfload jobs fail if write allocation fails
This uses curl -f when writing an allocation in order to detect when
the server has responded with an HTTP error code, and fails the job
if so. The idea is to catch when the required parameters of PUT
/allocations/{consumer_uuid} change and the perfload jobs need to be
updated.

The curl -S option is also added to show the error if curl fails.
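The resulting pattern can be sketched as a small shell function; the URL, consumer UUID, payload, and the function name itself are illustrative, not the job's actual values:

```shell
# Sketch of an allocation write that fails the job on an HTTP error.
# -f makes curl exit non-zero when the server returns a 4xx/5xx;
# -S shows the error message even when -s suppresses progress output.
write_allocation() {
    # $1: placement base URL  $2: consumer uuid  $3: JSON payload
    curl -Sf -s -X PUT \
        -H 'Content-Type: application/json' \
        -d "$3" \
        "$1/allocations/$2"
}

# In the job script a failed write then aborts the run, e.g.:
# write_allocation "$PLACEMENT_URL" "$uuid" "$payload" || exit 1
```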

Change-Id: Ic06e64b1031ff37d7ada55449ae71cd39b1298a2
2022-04-01 23:40:33 +00:00
Balazs Gibizer a57215e8b6 Fix perfload jobs after consumer_types
Since consumer_types was added to the API in
I24c2315093e07dbf25c4fb53152e6a4de7477a51, the two perfload jobs are
getting errors from placement as they are using the latest microversion
but do not specify the consumer_type when creating allocations.

The server could not comply with the request since it is either
malformed or otherwise incorrect.

JSON does not validate: 'consumer_type' is a required property

This patch changes the allocation request to specify TEST as the
consumer type.
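With consumer types in the API, a PUT /allocations/{consumer_uuid} body at the latest microversion must carry a consumer_type key. A minimal body in that shape might look like the following; the provider UUID, project/user IDs, and resource amounts are placeholders, not the job's actual values:

```json
{
  "allocations": {
    "<resource_provider_uuid>": {
      "resources": {"VCPU": 1, "MEMORY_MB": 256}
    }
  },
  "consumer_generation": null,
  "project_id": "<project_uuid>",
  "user_id": "<user_uuid>",
  "consumer_type": "TEST"
}
```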

Change-Id: I31500e3e6df5717d6bdb6ed7ed43325653d49be5
2022-02-07 16:47:17 +01:00
melanie witt 4b95c078cc Update perfload jobs for python3
Change-Id: Ie1cd1286797d89b50ead8e4ca87d5c4862b7524b
2020-08-05 01:54:08 +00:00
Chris Dent ed03085187 Add apache benchmark (ab) to end of perfload jobs
Start the process of reporting some concurrency numbers by including
a 500 x 10 'ab' run against the query URL used in each perfload job.

There is duplication that could be removed here, but we leave
that until we've determined whether this is working well.

The PLACEMENT_URL is updated to use 127.0.0.1 instead of localhost;
ab will otherwise attempt to use the IPv6 version of localhost, and
we've not bound the placement server to that interface.
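The benchmark step amounts to a wrapper of roughly this shape; the function name and URL are illustrative, while the flag values are the 500 x 10 run described above:

```shell
# Run apache benchmark against the perfload query URL:
# -n 500: total requests, -c 10: concurrency.
run_bench() {
    ab -n 500 -c 10 "$1"
}

# Using 127.0.0.1 rather than localhost avoids ab picking the IPv6
# loopback (::1), which the placement server is not bound to, e.g.:
# run_bench "http://127.0.0.1:8000/allocation_candidates?resources=VCPU:1"
```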

The timeout on the placement-nested-perfload job has been raised to
1 hour as the default 30 minutes is leading to a timeout. If that's
still not enough we'll explore lowering concurrency.

We will quite likely need to adapt the mysql configuration if we
intend to continue down this road.

Change-Id: Ic0bf2ab666dab546dd7b03955473c246fd0f380a
2019-08-06 09:18:44 +01:00
Chris Dent 07d7749cff Implement a more complex nested-perfload topology
This changes gabbits/nested-perfload.yaml to create a tree of
providers based on one of the compute nodes in the NUMANetworkFixture
used in the functional tests. For the time being only one type of
compute node is created (of which there will be 1000 instances).
Room is left for future expansion as requirements expand.

The resulting hierarchy has 7 resource providers.

The allocation candidates query is:

GET /allocation_candidates?
    resources=DISK_GB:10&
    required=COMPUTE_VOLUME_MULTI_ATTACH&
    resources_COMPUTE=VCPU:1,MEMORY_MB:256&
    required_COMPUTE=CUSTOM_FOO&
    resources_FPGA=FPGA:1&
    group_policy=none&
    same_subtree=_COMPUTE,_FPGA

This is a step in the right direction but does not yet completely
exercise all the nested functionality. It is, however, more
complex than before, notably testing 'same_subtree'. We should
continue to iterate to get it doing more.

Change-Id: I67d8091b464cd7b875b37766f52818a5a2faa780
Story: 2005443
Task: 35669
2019-08-06 09:18:39 +01:00
Chris Dent 7464ff6e24 Run nested-perfload parallel correctly
While experimenting with expanding the nested perfload tests,
it became clear that the call to parallel was not working as
expected because the documentation was misread. With help from
Tetsuro the correct incantation was determined so that we
use 50% of the available CPUs.
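GNU parallel accepts a percentage for its jobs limit, so the fixed invocation has the shape below; the worker command and count are hypothetical:

```shell
# --jobs 50% caps concurrent jobs at half the available CPU threads,
# leaving headroom for the database and web server, e.g.:
# seq 1000 | parallel --jobs 50% create_one_topology {}

# The same limit computed by hand (minimum 1):
half_cpus() {
    n=$(( $(nproc) / 2 ))
    [ "$n" -ge 1 ] || n=1
    echo "$n"
}
```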

This should leave some space for the database and the web
server.

Subsequent patches will add a more complicated nested structure.

Co-Authored-By: Tetsuro Nakamura <tetsuro.nakamura.bc@hco.ntt.co.jp>
Change-Id: Ie4809abc31212711b96f69e5f291104ae761059e
2019-08-06 09:14:58 +01:00
Chris Dent ed0af2e4aa Fix up some inaccuracies in perfload comments and logs
The review of the addition of nested perfload (in
I617161fde5b844d7f52dc766f85c1b9f1b139e4a ) identified some
inaccuracies in the comments and logs. This fixes some of
those.

It does not, however, fix some of the duplication between the
two runner scripts. This will be done later.

Change-Id: I9c57125e818cc583a977c8155fcefcac2e3b59df
2019-07-01 15:58:29 +00:00
Matt Riedemann 41287a7464 Remove gate/post_test_hook.sh
The post_test_hook script in the gate/ directory is a carry-over
from the split from the nova repo and is not used in placement
so we can delete it.

Change-Id: Id64c55f7c5ce730b8f1fa7cf17ff083d65e6bf78
2019-06-28 14:38:38 -04:00
Chris Dent 3b040f58f7 Move non-nested perfload shell commands to script
The script was embedded in the playbook, which leads to some
pain with regard to editing and reviewing as well as manual
testing.

The disadvantage of doing this is that it can make jobs
somewhat less portable between projects, but in this case
that's not really an issue.

There are further improvements that can be made to remove duplication
between the nested and non-nested versions of these jobs. This
change will make it easier for those changes to be made as
people have time.

Change-Id: Ia6795ef15a03429c19e66ed6d297f62da72cc052
2019-06-20 12:38:08 +01:00
Chris Dent 8723bd7772 Nested provider performance testing
This change duplicates the ideas started with the placement-perfload
job and builds on it to create a set of nested trees that can be
exercised.

In placement-perfload, placeload is used to create the providers. This
proves to be cumbersome for nested topologies so this change starts
a new model: Using parallel [1] plus instrumented gabbi to create
nested topologies in a declarative fashion.

gate/perfload-server.sh sets up placement db and starts a uwsgi server.

gate/perfload-nested-loader.sh is called in the playbook to cause gabbi
to create the nested topology described in
gate/gabbits/nested-perfload.yaml. That topology is intentionally very
naive right now but should be made more realistic as we continue to
develop nested features.
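The loader's shape, then, is roughly the following sketch; the tree count, host:port, and function name are illustrative, assuming gabbi-run's `host -- file` calling convention:

```shell
# Fan out N instrumented gabbi runs, each creating one nested
# provider tree against the running placement server.
load_trees() {
    # $1: number of trees  $2: placement host:port
    seq 1 "$1" | \
        parallel --jobs 50% gabbi-run "$2" -- gate/gabbits/nested-perfload.yaml
}
```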

There's some duplication between perfload.yaml and
nested-perfload.yaml that will be cleared up in a followup.

[1] https://www.gnu.org/software/parallel/ (although the version on
ubuntu is a non-GPL clone)

Story: 2005443
Task: 30487
Change-Id: I617161fde5b844d7f52dc766f85c1b9f1b139e4a
2019-06-20 12:37:28 +01:00
Chris Dent e6545dc2b2 Add a perfload job.
This adds the placeload perf output as its own job, using a
very basic devstack setup. It is non-voting. If it reports as
failing it means it was unable to generate the correct number
of resource providers against which to test.

It ought to be possible to do this without devstack, and thus speed
things up, but some more digging in existing zuul playbooks is
needed first, and having some up-to-date performance info is useful
now.

Change-Id: Ic1a3dc510caf2655eebffa61e03f137cc09cf098
2018-11-30 14:59:47 +00:00
Chris Dent 6dcdc85d6c Add trait query to placement perf check
This updates the EXPLANATION and sets the pinned version of placeload
to the just-released 0.3.0. This ought to hold us for a while. If
we need to do this again, we should probably switch to using
requirements files in some fashion, but I'm hoping we can avoid
that until later, potentially even after placement extraction,
when we will have to move and change this anyway.

Change-Id: Ia3383c5dbbf8445254df774dc6ad23f2b9a3721e
2018-08-16 18:32:12 +01:00
Chris Dent 3673258049 Add explanatory prefix to post_test_perf output
The pirate on crack output of placeload can be confusing
so this change adds a prefix to the placement-perf.txt log
file so that it is somewhat more self-explanatory.

This change also pins the version of placeload because the
explanation is version dependent.

Change-Id: I055adb5f6004c93109b17db8313a7fef85538217
2018-08-16 18:21:47 +01:00
Chris Dent fc45edca78 Add placement perf info gathering hook to end of nova-next
This change adds a post test hook to the nova-next job to report
timing of a query to GET /allocation_candidates when there are 1000
resource providers with the same inventory.

A summary of the work ends up in logs/placement-perf.txt

Change-Id: Idc446347cd8773f579b23c96235348d8e10ea3f6
2018-08-14 15:42:08 +01:00
Dan Smith 5c837673d6 Make nova-manage db purge take --all-cells
This makes purge iterate over all cells if requested. This also makes our
post_test_hook.sh use the --all-cells variant with just the base config
file.

Related to blueprint purge-db

Change-Id: I7eb5ed05224838cdba18e96724162cc930f4422e
2018-03-08 09:26:49 -08:00
Dan Smith bc54f4de5e Add simple db purge command
This adds a simple purge command to nova-manage. It either deletes all
shadow archived data, or data older than a date if provided.

This also adds a post-test hook to run purge after archive to validate
that it at least works on data generated by a gate run.

Related to blueprint purge-db

Change-Id: I6f87cf03d49be6bfad2c5e6f0c8accf0fab4e6ee
2018-03-07 10:35:32 -08:00
Dan Smith 2f75e7a404 Run post-test archive against cell1
Change-Id: I4af326fe66f0cf24ede8a8b7a8ce0e528c4f437c
2018-03-07 10:35:32 -08:00
Matt Riedemann a6ab799b35 Check for leaked server resource allocations in post_test_hook
The post_test_hook.sh runs in the nova-next CI job. The 1.0.0
version of the osc-placement plugin adds the CLIs to show consumer
resource allocations.

This adds some sanity check code to the post_test_hook.sh script
to look for any resource providers (compute nodes) that have
allocations against them, which shouldn't be the case for successful
test runs where servers are cleaned up properly.

Change-Id: I9801ad04eedf2fede24f3eb104715dcc8e20063d
2018-02-24 02:27:38 +00:00
Sean Dague 38a72d7118 move gate hooks to gate/
We prevent a lot of tests from getting run on tools/ changes given
that most of that directory is unrelated to running any tests. Having
the gate hooks in that directory made for a somewhat odd separation
of what is test-sensitive and what is not.

This moves things to the gate/ top level directory, and puts a symlink
in place to handle project-config compatibility until that can be
updated.

Change-Id: Iec9e89f0380256c1ae8df2d19c547d67bbdebd65
2017-01-04 11:05:16 +00:00