2020 World Single Distances Speed Skating Championships – Men's team sprint

The Men's team sprint competition at the 2020 World Single Distances Speed Skating Championships was held on February 13, 2020.[1][2]

Men's team sprint
at the 2020 World Single Distances Speed Skating Championships
Venue: Utah Olympic Oval
Location: Salt Lake City, United States
Dates: February 13
Competitors: 24 from 8 nations
Teams: 8
Winning time: 1:18.18
Medalists
    Gold: Netherlands
    Silver: China
    Bronze: Norway

Results

The race started at 15:40.[3]

Rank | Pair | Lane | Country | Athletes | Time | Diff
1 | 4 | c | Netherlands | Dai Dai Ntab, Kai Verbij, Thomas Krol | 1:18.18 | –
2 | 1 | c | China | Gao Tingyu, Wang Shiwei, Ning Zhongyan | 1:18.53 | +0.35
3 | 2 | c | Norway | Bjørn Magnussen, Håvard Holmefjord Lorentzen, Odin By Farstad | 1:19.54 | +1.36
4 | 1 | s | Japan | Yuma Murakami, Yamato Matsui, Masaya Yamada | 1:19.59 | +1.41
5 | 4 | s | Switzerland | Oliver Grob, Christian Oberbichler, Livio Wenger | 1:20.03 | +1.85
6 | 3 | s | Kazakhstan | Artur Galiyev, Stanislav Palkin, Alexander Klenko | 1:20.39 | +2.21
– | 2 | s | Russia | Ruslan Murashov, Viktor Mushtakov, Pavel Kulizhnikov | Did not finish | –
– | 3 | c | Canada | Gilmore Junio, Laurent Dubreuil, Antoine Gélinas-Beaulieu | Disqualified | –

References
