Archive datasets flow
archive_datasets_flow(job_id, dataset_ids=None)
Prefect flow to archive a list of datasets; corresponds to a "Job" in Scicat. Runs the archival of each dataset as a subflow and reports the overall job status to Scicat.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `dataset_ids` | `List[str]` | description | `None` |
| `job_id` | `UUID` | description | *required* |
Raises:

| Type | Description |
|---|---|
| `e` | description |
Source code in backend/archiver/flows/archive_datasets_flow.py
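The flow's orchestration logic can be sketched in plain Python, with Prefect decorators omitted. Here `archive_one` and `report_status` are hypothetical stand-ins for the per-dataset subflow and the Scicat job-status update, and the status strings are placeholders, not Scicat's actual status values:

```python
def archive_datasets(job_id, dataset_ids, archive_one, report_status):
    """Run per-dataset archivals and report an overall job status.

    `archive_one` and `report_status` are hypothetical callables standing
    in for the dataset subflow and the Scicat status update.
    """
    failed = []
    for dataset_id in dataset_ids or []:
        try:
            archive_one(dataset_id)
        except Exception:
            failed.append(dataset_id)
    # Placeholder status names; the real flow reports Scicat's job statuses.
    status = "completed" if not failed else "failed"
    report_status(job_id, status)
    return failed
```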
check_free_space_in_LTS()
Prefect task that waits for free space in the LTS, periodically checking whether the free-space condition is fulfilled. Only one of these tasks runs at a time; the others are scheduled only once this task has finished, i.e. once there is enough space.
Source code in backend/archiver/flows/archive_datasets_flow.py
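The blocking wait-for-space check can be sketched as a simple polling loop. `get_free_bytes` is a hypothetical callable standing in for the LTS free-space query; the real task's polling interval and concurrency limit are configured in Prefect:

```python
import time


def wait_for_free_space(get_free_bytes, required_bytes, poll_seconds=1.0, max_polls=10):
    """Block until the storage reports at least `required_bytes` free.

    Polls `get_free_bytes` (a hypothetical stand-in for the LTS query)
    until the condition holds, or raises after `max_polls` attempts.
    """
    for _ in range(max_polls):
        if get_free_bytes() >= required_bytes:
            return True
        time.sleep(poll_seconds)
    raise TimeoutError("LTS did not free enough space in time")
```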
copy_datablock_from_LTS(dataset_id, datablock)
Prefect task to copy a datablock (.tar.gz file) from the LTS. Concurrency of this task is limited to two instances at a time.
Source code in backend/archiver/flows/archive_datasets_flow.py
create_datablocks_flow(dataset_id)
Prefect (sub-)flow to create datablocks (.tar.gz files) for files of a dataset and register them in Scicat.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `dataset_id` | `str` | Dataset id | *required* |

Returns:

| Type | Description |
|---|---|
| `List[DataBlock]` | List of created and registered datablocks |
Source code in backend/archiver/flows/archive_datasets_flow.py
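The packing step of this flow can be sketched with the standard library's `tarfile` module. `create_datablock` is a hypothetical helper illustrating how a set of dataset files becomes one .tar.gz datablock; the real flow additionally registers the result in Scicat:

```python
import os
import tarfile


def create_datablock(files, archive_path):
    """Pack `files` into one gzip-compressed tarball (a 'datablock').

    A minimal sketch: each file is stored under its base name, and the
    path of the finished archive is returned.
    """
    with tarfile.open(archive_path, "w:gz") as tar:
        for path in files:
            tar.add(path, arcname=os.path.basename(path))
    return archive_path
```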
move_data_to_LTS(dataset_id, datablock)
Prefect task to move a datablock (.tar.gz file) to the LTS. Concurrency of this task is limited to two instances at a time.
Source code in backend/archiver/flows/archive_datasets_flow.py
move_datablocks_to_lts_flow(dataset_id, datablocks)
Prefect (sub-)flow to move datablocks to the LTS. Implements the copying of the data and its verification via checksum.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `dataset_id` | `str` | description | *required* |
| `datablock` | `DataBlock` | description | *required* |
Source code in backend/archiver/flows/archive_datasets_flow.py
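The copy-and-verify step can be sketched with standard-library tools. `copy_with_checksum` is a hypothetical helper, not the flow's actual implementation; it copies a datablock and compares SHA-256 digests of source and destination:

```python
import hashlib
import shutil


def copy_with_checksum(src, dst, chunk_size=1 << 20):
    """Copy `src` to `dst`, then verify by comparing SHA-256 digests.

    Raises IOError on a digest mismatch; returns the hex digest on success.
    """
    def digest(path):
        h = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    shutil.copyfile(src, dst)
    if digest(src) != digest(dst):
        raise IOError("checksum mismatch after copy")
    return digest(dst)
```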
on_get_origdatablocks_error(dataset_id, task, task_run, state)
Callback for get_origdatablocks tasks. Reports a user error.
Source code in backend/archiver/flows/archive_datasets_flow.py
sleep_for(time_in_seconds)
Sleeps for a given amount of time. Required to wait for the LTS to update its internal state. Needs to be blocking, as it must prevent the following task from running.
Source code in backend/archiver/flows/archive_datasets_flow.py
verify_datablock_in_verification(dataset_id, datablock)
Prefect task to verify a datablock in the LTS against a checksum. Tasks of this type run without concurrency, since the LTS only allows limited concurrent access.
Source code in backend/archiver/flows/archive_datasets_flow.py