# Benchmarking

## Test Phases

1. Phase 1 (MVP)
   - the main goal is to understand general limitations and get metrics for the core protocol
   - just one txn type
   - a dedicated but simple environment (a simple deployment of validators similar to the current TestNet: no observers, no sentry nodes, no mediators)

2. Phase 2
   - the main goal is to test the DCL specifics (DCL business logic and txn types; deployment specifics)
   - more txn types
   - more workload scenarios
   - a deployment that simulates production as closely as possible

## Client Side Metrics

* `response time` (percentiles): the time between the client's initial request and the last byte of the validator's response
* `requests per second (RPS)`: the number of requests (reads and writes) served per second
* `transactions per second (TPS)`: the number of write requests (txns) served per second
  * **Note**: to measure TPS on the client side, write requests should use the `broadcast_tx_commit` endpoint, which returns only once the txn is included in a block (unlike `broadcast_tx_sync` / `broadcast_tx_async`); see the sketch below
* `number of clients`: the number of concurrent clients that the ledger serves
* (optional) `throughput` (in/out): the number of KB per second, in and out. Marked as optional since we don't expect much in/out data due to relatively small txn payloads.
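
A minimal sketch of how these metrics could be collected, assuming a Tendermint RPC endpoint on the default port (26657); the URL, concurrency numbers, and the placeholder txn bytes are illustrative assumptions, not the project's actual benchmarking harness:

```python
# Minimal client-side metrics sketch (illustrative; not the project's harness).
# Assumptions: a Tendermint RPC endpoint on the default port 26657 and
# pre-signed txn bytes (the placeholder below would be rejected by a real node).
import base64
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # pip install requests

RPC_URL = "http://localhost:26657"  # assumed endpoint
NUM_CLIENTS = 10                    # concurrent clients
REQUESTS_PER_CLIENT = 100           # write requests per client


def send_txn(signed_tx: bytes) -> float:
    """Send one write request via broadcast_tx_commit; return its latency in seconds.

    broadcast_tx_commit returns only after the txn is included in a block,
    so each successful response can be counted towards TPS.
    """
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "broadcast_tx_commit",
        "params": {"tx": base64.b64encode(signed_tx).decode()},
    }
    start = time.monotonic()
    requests.post(RPC_URL, json=payload, timeout=60).raise_for_status()
    return time.monotonic() - start


def client_worker(_: int) -> list:
    # Placeholder txn; a real run needs a unique, properly signed txn per
    # request (account sequence numbers forbid duplicates).
    signed_tx = b"..."
    return [send_txn(signed_tx) for _ in range(REQUESTS_PER_CLIENT)]


if __name__ == "__main__":
    started = time.monotonic()
    with ThreadPoolExecutor(max_workers=NUM_CLIENTS) as pool:
        latencies = [t for batch in pool.map(client_worker, range(NUM_CLIENTS))
                     for t in batch]
    elapsed = time.monotonic() - started

    print(f"TPS: {len(latencies) / elapsed:.1f}")
    p = statistics.quantiles(latencies, n=100)  # 99 cut points: p[49] is p50
    print(f"response time p50={p[49]:.3f}s p95={p[94]:.3f}s p99={p[98]:.3f}s")
```

Note that `broadcast_tx_commit` blocks until the txn is committed, so the measured latency includes consensus time; `broadcast_tx_sync` would only measure mempool admission and is not suitable for client-side TPS.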

## Environment

Options:

* dedicated, as close to production as possible (the best option)
* dedicated, a simple deployment of validators similar to the current TestNet (no observers, no sentry nodes, no mediators); good for the initial test phase
* local in-docker (for PoC / debugging only)
* TestNet: not a good option, since it's not a clean environment and it would be spammed and might be broken by the load testing

Notes:

* For the moment it's not clear what the production setup will look like, in particular:
  * the number of validators
  * the type of external endpoints; the options are the [Cosmos SDK / Tendermint endpoints](https://docs.cosmos.network/master/core/grpc_rest.html) (see the probe sketch after this list)
  * the type and number of proxies for validator-validator and client-validator connections
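
As a reference for the endpoint options above, a quick probe of the two HTTP endpoint types using the default Cosmos SDK / Tendermint ports; the host, ports, and response fields are assumptions about a typical Cosmos SDK v0.40+ node, not a statement about the eventual production setup:

```python
# Illustrative probe of the standard endpoint types (assumed default ports).
import requests

NODE = "localhost"  # placeholder host

# Tendermint RPC (default port 26657): consensus-level queries and txn broadcast.
status = requests.get(f"http://{NODE}:26657/status").json()
print(status["result"]["sync_info"]["latest_block_height"])

# Cosmos SDK REST (gRPC-gateway, default port 1317): application-level queries.
info = requests.get(
    f"http://{NODE}:1317/cosmos/base/tendermint/v1beta1/node_info").json()
print(info["default_node_info"]["network"])

# gRPC (default port 9090) is the third option; it requires generated protobuf
# stubs, so it's omitted from this HTTP-only sketch.
```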
|
57 | 73 |
|
58 |
| -Current assumptions: |
| 74 | +Current assumptions for production: |
59 | 75 |
|
60 |
| -* multiple companies (vendors) will manage one/multiple validators |
61 |
| -* while some common requirements and recommendations would be provided each vendor will deploy the infrastructure independently with some freedom regarding internal architecture |
| 76 | +* multiple companies will manage one/multiple validators |
| 77 | +* while some common requirements and recommendations would be provided each company will deploy the infrastructure independently with some freedom regarding internal architecture |
62 | 78 | * there would be a set of external (for clients) and internal (for validators to support txn flows) endpoints
|
63 | 79 | * most likely observer nodes along with REST http servers with clients authentication would be in front of the client endpoints