.. Benchmarking Suite
.. Copyright 2014-2017 Engineering Ingegneria Informatica S.p.A.
..
.. Licensed under the Apache License, Version 2.0 (the "License");
.. you may not use this file except in compliance with the License.
.. You may obtain a copy of the License at
..
..     http://www.apache.org/licenses/LICENSE-2.0
..
.. Unless required by applicable law or agreed to in writing, software
.. distributed under the License is distributed on an "AS IS" BASIS,
.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
.. See the License for the specific language governing permissions and
.. limitations under the License.
..
.. Developed in the ARTIST EU project (www.artist-project.eu), the
.. CloudPerfect EU project (https://cloudperfect.eu/) and EasyCloud,
.. an innovation activity supported by EIT Digital
.. (https://www.eitdigital.eu)

##################
Executions
##################

Execution management gives the user the capability to run benchmarks on any cloud service the user has registered in the platform. With a single request, the user can trigger the execution of any number of workloads on different sizes of different VM images of a given cloud service. The backend services then take care of the VM provisioning process, benchmark installation, execution and results collection or, in case of failure, of reporting logs and exceptions to enable troubleshooting.

If multiple VMs are to be created, they are provisioned on the VDC sequentially to minimize the impact on any resource quota the user might have on the VDC. Similarly, if multiple benchmarks have been requested, they are executed sequentially within the same VM to minimize provisioning requests and, thus, overhead times and resource utilization.

Execution management also provides the user with capabilities to browse previous executions and to inspect collected metrics and logs.
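The sequential scheduling described above can be sketched as follows. This is a minimal illustration of the behaviour, not the actual backend code; ``provision_vm``, ``run_benchmark`` and ``destroy_vm`` are hypothetical placeholders:

```python
# Illustrative sketch of the sequential scheduling described above.
# All function names are hypothetical placeholders, not the real backend API.

def run_execution(vm_requests, workloads):
    """Provision VMs one at a time and run all workloads sequentially
    on each VM, collecting results (or errors) per combination."""
    results = {}
    for vm_req in vm_requests:          # one VM at a time: limits pressure
        vm = provision_vm(vm_req)       # on the user's quota on the VDC
        try:
            for workload in workloads:  # sequential within the same VM:
                results[(vm_req, workload)] = run_benchmark(vm, workload)
        except Exception as exc:        # on failure, keep the error for
            results[(vm_req, workload)] = ("error", str(exc))  # troubleshooting
        finally:
            destroy_vm(vm)
    return results

# Dummy stand-ins so the sketch is self-contained and runnable
def provision_vm(req):
    return f"vm-{req}"

def run_benchmark(vm, workload):
    return ("ok", f"{workload}@{vm}")

def destroy_vm(vm):
    pass
```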
At submission time, the user can also control the visibility scope of collected metrics: results can be kept private, shared within the user's organization or made available to any registered user.

Executing a benchmark through the GUI
=====================================

From the 'Executions' panel, a benchmark can be run via the 'New Execution' button.

Provider
--------

The first step is to select a Service Provider to benchmark. Here you will find the list of providers you registered in the 'Provider' panel.

Benchmarked Resources
---------------------

Once a provider has been selected, this section presents the list of service types available at that provider. Typically it consists of VM images and available flavours. Multiple service types can be entered here.

Workloads
---------

In this section you can select the workloads to execute on the provider resources. Multiple workloads can be entered here.

Configuration
-------------

Entries in this section control various aspects of the benchmark execution:

Configure the workload
^^^^^^^^^^^^^^^^^^^^^^

Some workloads allow customization via *properties* and/or *environment variables* set on the host VM. Refer to the documentation of each workload to learn more about them.

Configure the benchmarking command
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This section sets additional options for the benchmarking controller command.

Configure the Docker image creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This section sets additional options for the benchmarking controller container.

Sharing results
---------------

By default, collected benchmarking metrics are private (i.e. visible to you only). However, you might want to share them with a wider audience: if made *public*, they will be visible to any registered user; when shared with an *organization*, all members of that organization will have access to them.

*Note: organization membership is managed externally to the benchmarking suite (e.g.
through LDAP, Keycloak, etc.)*

Browsing previous benchmark executions
======================================

By selecting the 'Executions' panel, the most recent benchmark executions are shown. This list contains your own executions and those run by other users who shared their results with you.

Both on-demand and scheduled executions appear here. You can tell what the trigger was by looking at the entry icon: *a rocket* for on-demand and *gears* for scheduled executions.

The list shows some general information about each execution. Selecting a workload shows full details about it. Such details include:

- the **start time**, **duration** and overall **status** of the execution;
- details about the **service provider**. If you are not the owner of this provider, you might see only a few things about it (i.e. name and description);
- if the execution was triggered by a **schedule**, a section with some basic info about it *as it was at execution time*;
- for each combination of selected workload, image and size, a section with **metrics**, **logs** and **error messages**.

Re-submitting an execution
==========================

Completed executions can be re-run as they were submitted. You are not constrained to the original submission request: everything can be modified or removed in the new submission.

.. **********************
.. Single Step Execution
.. **********************

.. The single step execution executes one or more benchmarks

.. **********************
.. Step-by-Step Execution
.. **********************

.. The Step-by-Step execution allows to

.. This is of particular interest if, during the execution of the benchmarks, it is needed to run other tools like profilers
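Putting the pieces together, a submission (or re-submission) request conceptually bundles a provider, the benchmarked resources, the workloads, their configuration and the visibility scope. The sketch below is purely illustrative: the field names, provider name and workload identifiers are hypothetical and do not reflect the real API schema.

```python
# Hypothetical shape of a single execution request, bundling the sections
# described in this page. Field names and values are illustrative only.
import json

request = {
    "provider": "my-openstack",                   # from the 'Provider' panel
    "services": [                                 # benchmarked resources:
        {"image": "ubuntu-16.04", "flavor": "m1.small"},   # image + flavour
        {"image": "ubuntu-16.04", "flavor": "m1.large"},   # combinations
    ],
    "workloads": ["ycsb-mongodb", "filebench"],   # run sequentially per VM
    "configuration": {
        "properties": {"duration": "60"},         # hypothetical workload property
        "environment": {"JAVA_OPTS": "-Xmx2g"},   # env variable on the host VM
    },
    "visibility": "organization",                 # private | organization | public
}

print(json.dumps(request, indent=2))
```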