Archive datasets flow
archive_datasets_flow(job_id, dataset_ids=None)
Prefect flow to archive a list of datasets; corresponds to a "Job" in Scicat. Runs the archival of each dataset as a subflow and reports the overall job status to Scicat.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| job_id | UUID | Id of the corresponding Scicat job. | required |
| dataset_ids | List[str] | Ids of the datasets to archive. | None |
Raises:

| Type | Description |
|---|---|
| Exception | Re-raised if the archival of a dataset fails. |
Source code in backend/archiver/flows/archive_datasets_flow.py
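The wiring below is a minimal sketch of such a flow, not the actual implementation. It assumes the subflows documented on this page and a hypothetical `report_job_status` helper for the Scicat job update:

```python
from typing import List, Optional
from uuid import UUID

from prefect import flow


@flow(name="archive_datasets")
def archive_datasets_flow(job_id: UUID, dataset_ids: Optional[List[str]] = None):
    dataset_ids = dataset_ids or []
    try:
        for dataset_id in dataset_ids:
            # Subflows documented below.
            datablocks = create_datablocks_flow(dataset_id)
            move_datablocks_to_lts_flow(dataset_id, datablocks)
        report_job_status(job_id, "finishedSuccessful")         # hypothetical helper
    except Exception as e:
        report_job_status(job_id, "finishedWithDatasetErrors")  # hypothetical helper
        raise e
```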
check_free_space_in_LTS()
Prefect task to wait for free space in the LTS. Periodically checks whether enough free space is available. Only one of these tasks runs at a time; the others are only scheduled once this task has finished, i.e. once there is enough space.
Source code in backend/archiver/flows/archive_datasets_flow.py
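A sketch of such a guard task, assuming a hypothetical mount point, threshold, and tag name; the tag's concurrency limit would be set to 1 so that only one waiter polls at a time:

```python
import shutil
import time

from prefect import task

# Assumed values; the real mount point and threshold come from the archiver config.
LTS_ROOT = "/lts"
MINIMUM_FREE_BYTES = 500 * 1024**3


@task(tags=["wait-for-free-space"])  # tag name assumed; its limit would be 1
def check_free_space_in_LTS() -> None:
    # Poll until the LTS volume reports enough free space.
    while shutil.disk_usage(LTS_ROOT).free < MINIMUM_FREE_BYTES:
        time.sleep(60)
```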
copy_datablock_from_LTS(dataset_id, datablock)
Prefect task to copy a datablock (.tar.gz file) back from the LTS. Concurrency of this task is limited to 2 instances at the same time.
Source code in backend/archiver/flows/archive_datasets_flow.py
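A sketch of the copy step, with the `DataBlock` argument simplified to a file name and hypothetical mount points standing in for the configured ones:

```python
import shutil
from pathlib import Path

from prefect import task

# Hypothetical mount points; the real paths come from the archiver config.
LTS_ROOT = Path("/lts")
SCRATCH_ROOT = Path("/scratch")


@task(tags=["lts-read"])  # tag name assumed; its concurrency limit would be 2
def copy_datablock_from_LTS(dataset_id: str, datablock_name: str) -> Path:
    src = LTS_ROOT / dataset_id / datablock_name
    dst = SCRATCH_ROOT / dataset_id / datablock_name
    dst.parent.mkdir(parents=True, exist_ok=True)
    # Copy the .tar.gz back to scratch storage so its checksum can be verified.
    shutil.copy2(src, dst)
    return dst
```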
create_datablocks_flow(dataset_id)
Prefect (sub-)flow to create datablocks (.tar.gz files) from the files of a dataset and register them in Scicat.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataset_id | str | Dataset id | required |
Returns:

| Type | Description |
|---|---|
| List[DataBlock] | List of created and registered datablocks |
Source code in backend/archiver/flows/archive_datasets_flow.py
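The packing step could look roughly like this; `pack_datablock` is a hypothetical helper, and the registration of the resulting DataBlock in Scicat is omitted:

```python
import tarfile
from pathlib import Path
from typing import List


def pack_datablock(dataset_root: Path, files: List[Path], tar_path: Path) -> Path:
    # Pack a batch of dataset files into one .tar.gz datablock.
    with tarfile.open(tar_path, "w:gz") as tar:
        for f in files:
            tar.add(f, arcname=str(f.relative_to(dataset_root)))
    return tar_path
```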
move_data_to_LTS(dataset_id, datablock)
Prefect task to move a datablock (.tar.gz file) to the LTS. Concurrency of this task is limited to 2 instances at the same time.
Source code in backend/archiver/flows/archive_datasets_flow.py
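Prefect enforces such limits via task tags. A sketch, with the tag name assumed; the limit itself is created once, e.g. with `prefect concurrency-limit create lts-write 2`:

```python
from prefect import task


@task(tags=["lts-write"])  # tag name assumed; at most 2 tagged runs at a time
def move_data_to_LTS(dataset_id: str, datablock_name: str) -> None:
    ...
```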
move_datablocks_to_lts_flow(dataset_id, datablocks)
Prefect (sub-)flow to move a dataset's datablocks to the LTS. Implements the copying of the data and its verification via checksum.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataset_id | str | Id of the dataset the datablocks belong to. | required |
| datablocks | List[DataBlock] | Datablocks to move to the LTS. | required |
Source code in backend/archiver/flows/archive_datasets_flow.py
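A sketch of the wiring, using the tasks documented on this page; the exact ordering is an assumption based on their descriptions:

```python
from typing import List

from prefect import flow


@flow(name="move_datablocks_to_lts")
def move_datablocks_to_lts_flow(dataset_id: str, datablocks: List["DataBlock"]):
    for datablock in datablocks:
        # Per datablock: wait for space, move, let the LTS settle,
        # copy back, verify against the checksum.
        check_free_space_in_LTS()
        move_data_to_LTS(dataset_id, datablock)
        sleep_for(60)  # wait for the LTS to update its internal state
        copy_datablock_from_LTS(dataset_id, datablock)
        verify_datablock_in_verification(dataset_id, datablock)
```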
on_get_origdatablocks_error(dataset_id, task, task_run, state)
Failure callback for get_origdatablocks tasks. Reports a user error.
Source code in backend/archiver/flows/archive_datasets_flow.py
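Prefect failure hooks receive (task, task_run, state), so the extra dataset_id argument suggests binding with functools.partial. A sketch, with `get_origdatablocks` and `report_user_error` as hypothetical stand-ins:

```python
from functools import partial

from prefect import task


def on_get_origdatablocks_error(dataset_id, task, task_run, state):
    # dataset_id is bound up front; Prefect supplies the remaining arguments.
    report_user_error(dataset_id, state)  # hypothetical reporting helper


@task
def get_origdatablocks(dataset_id: str):
    ...


# Attach the hook per dataset when the task is called:
get_origdatablocks.with_options(
    on_failure=[partial(on_get_origdatablocks_error, "my-dataset")]
)("my-dataset")
```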
sleep_for(time_in_seconds)
Sleeps for the given amount of time. Required to wait for the LTS to update its internal state. Needs to be blocking, as it must prevent the following task from running until the wait is over.
Source code in backend/archiver/flows/archive_datasets_flow.py
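A minimal sketch of such a blocking wait:

```python
import time

from prefect import task


@task
def sleep_for(time_in_seconds: int) -> None:
    # time.sleep blocks the worker, so downstream tasks that depend on
    # this one cannot start before the wait is over.
    time.sleep(time_in_seconds)
```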
verify_datablock_in_verification(dataset_id, datablock)
Prefect task to verify a datablock in the LTS against a checksum. Tasks of this type run with no concurrency, since the LTS only allows limited concurrent access.
Source code in backend/archiver/flows/archive_datasets_flow.py
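The comparison itself boils down to hashing the copied file and matching it against the recorded value. A sketch, assuming SHA-256 (the archiver's actual checksum algorithm may differ):

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1024 * 1024) -> str:
    # Stream the file so large .tar.gz datablocks need not fit in memory.
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```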