| Last Change | Test ID | Build | Tester | Results Observed | Discussion |
| --- | --- | --- | --- | --- | --- |
| 8/27/2019 16:53:59 | WW1 | DB6 | Json | Publishing blocks with watch_work = true and false. True: slower BPS when the network is saturated, and confirmations take longer. | |
| 8/28/2019 9:51:54 | RPC1 | DB6 | robotn | Setting "difficulty" at the live-net level (16x) but "multiplier" at 5x generates blocks at 5x and above, ignoring the "difficulty" setting. | |
| 8/29/2019 3:42:15 | AD1 | DB6 | Dotcom | Active difficulty is stable when elections are resolved in a timely manner. This is not currently the case for single-account-chain spam. | |
| 9/2/2019 2:43:21 | NW1 | DB7 | Robotn | Getting the block hash response from a "send" command is nearly instant, even during heavy load. | |
| 9/2/2019 2:43:23 | AD1 | DB7 | Robotn | Looks stable; peaked at 2.25 during heavy load. | |
| 9/2/2019 2:43:25 | NW1 | DB7 | Srayman | Noticed on live during spam that block_create/process could potentially be slow as well; not seeing that on DB7. | |
| 9/2/2019 2:43:26 | CA1 | DB7 | Srayman | gap_cache fluctuates quite a bit but stays capped consistently at 256 during block broadcasting. votes_cache stays at 4096 after the node has been running for a while, and usually only a restart reduces it. inactive_votes_cache_count likewise stays at 2048 after the node has been running for a while, and usually only a restart reduces it. 9-1-2019 tests: of 710,626 confirmations recorded from the first instance of a block in confirmation_history, 38,175 show duration=0. | |
| 9/2/2019 2:43:28 | RPC2 | DB7 | Srayman | peers RPC with peer_details=true returns all peers with node_ | |
| 9/2/2019 2:44:57 | RD1 | DB7 | Dotcom | (Kamikaze) Successfully endured a saturating spam test with RocksDB, performing better than LMDB on ARM. Voting was disabled. | |
| 9/2/2019 2:46:49 | VG1 | DB7 | Dotcom | Vote-by-hash packing is in line with previous versions at high BPS, and more consistent at lower BPS. | |
| 9/3/2019 15:50:42 | VG1 | DB8 | Dotcom | Vote-by-hash can still be improved, but the observed changes reflect the code. At 250 CPS, mostly 12-hash packs are seen, with some 8-hash packs from batched confirm_req and many 1-hash packs from code that doesn't yet use vote batching. | |
| 9/3/2019 15:51:42 | WW2 | DB8 | Dotcom | Rework tested and working. An issue was found where it wouldn't use work peers; fixed from DB9 onwards. | |
| 9/4/2019 14:27:58 | WW1 | DB9 | Robotn/Dotcom | Tested locally and between Robotn and Dotcom using full fanout. Worked on both "send" and "process", and within the expected time period when using full fanout (work watch period at the default 5 seconds). | |
| 9/5/2019 18:32:17 | CH2 | DB9/10 | Robotn | A node with zero voting rep was behind because of testing WW1 with a high online weight minimum. After resetting to a 60 million online weight minimum, it does not catch up on cemented blocks (about 30k behind), even after waiting 15+ minutes; new blocks get cemented fine. Did the same on my rep node (high minimum online weight, etc.); on that one, cemented is in sync. | |
| 9/5/2019 18:32:16 | CH2 | DB10 | robotn | Set frontiers_confirmation = "always" and restarted; cemented was in sync within 3-5 minutes. | |
| | RD1 | DB9/10 | Srayman | Running RocksDB on Windows consumes a high amount of RAM during load tests. The node crashed on one attempt at 300 BPS but survived another attempt, though RAM usage increased to 4.5 GB. On the other hand, overall disk usage seems lower compared to LMDB. | |
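The CH2 follow-up resolved the stuck cemented count by setting frontiers_confirmation = "always". As a sketch, this is a node configuration option; the TOML form below matches the config format of later node releases, and the section placement is an assumption (earlier builds used a JSON config):

```toml
[node]
# "always": actively request confirmation of frontier blocks, which brought
# the cemented count back in sync within 3-5 minutes in the CH2 follow-up.
# Other documented values are "auto" (default) and "disabled".
frontiers_confirmation = "always"
```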
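The RPC1 observation concerns how the "difficulty" and "multiplier" inputs to work generation interact. As a reference point, the documented relation between a multiplier and an absolute difficulty threshold can be sketched as below; the constant is the live-network base threshold of that era, and the function names are illustrative, not taken from the node source.

```python
# Sketch of the Nano work multiplier <-> difficulty relation:
#   multiplier = (2**64 - base) / (2**64 - difficulty)
BASE_DIFFICULTY = 0xffffffc000000000  # live-network base threshold at the time

def difficulty_from_multiplier(multiplier: float, base: int = BASE_DIFFICULTY) -> int:
    """Absolute 64-bit threshold corresponding to a multiplier over `base`."""
    return int((1 << 64) - ((1 << 64) - base) / multiplier)

def multiplier_from_difficulty(difficulty: int, base: int = BASE_DIFFICULTY) -> float:
    """Multiplier of `difficulty` relative to `base`."""
    return ((1 << 64) - base) / ((1 << 64) - difficulty)

# The "16x live net" setting from test RPC1 corresponds to this threshold:
print(hex(difficulty_from_multiplier(16.0)))  # 0xfffffffc00000000
```

So a node honoring "multiplier": 5 while ignoring a 16x "difficulty", as observed, would be generating work at the 5x threshold rather than the higher one.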
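Tests RPC2 and WW1 exercise the node's JSON RPC interface. A minimal sketch of how those request payloads are shaped, using field names from the public Nano RPC documentation ("peers" with peer_details, "process" with watch_work); the helper functions themselves are illustrative:

```python
import json

def peers_payload(peer_details: bool = True) -> str:
    """Build a 'peers' RPC request; peer_details adds per-peer info to the response."""
    return json.dumps({"action": "peers", "peer_details": str(peer_details).lower()})

def process_payload(block: dict, watch_work: bool = True) -> str:
    """Build a 'process' RPC request; watch_work controls whether the node
    watches the published block and regenerates work if needed."""
    return json.dumps({
        "action": "process",
        "json_block": "true",
        "block": block,
        "watch_work": str(watch_work).lower(),
    })

print(peers_payload())  # {"action": "peers", "peer_details": "true"}
```

Toggling the watch_work flag here is the kind of A/B setup the WW1 row describes (publishing with watch_work = true vs. false).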