
Managing Jobs


In this section we describe the concept of jobs in Expedition: how to create them, monitor them, and cancel them.


Some of the background tasks in Expedition 2.0 are better performed as jobs that can be monitored, are non-blocking and, in some cases, can be cancelled. Importing a configuration is one such action: depending on the configuration size, the time required for the import may be excessive for blocking code.

In general terms, jobs are handled by an Agent in Expedition that should be started in advance. See the Managing Expedition's Agent section for more information.
Jobs are created in Expedition depending on the type of request being submitted: when a request has the potential to require a significant amount of time to complete, it is executed as a job by the Expedition Agent.

Example of a Job Creation

The process of importing a configuration is an example of a task that is executed by the Agent as a job. All requests that are executed as jobs answer back with a job_id that can later be monitored or cancelled (note: cancelling may not be allowed for all types of tasks).

```python
print('IMPORT CONFIGURATION')
data = {'resource': convertedResource}
r = requests.post('https://'+ip+'/api/v1/project/'+projectId+'/import',
                  data=data,
                  verify=False,
                  headers=hed)
response = r.json()
jobId = json.dumps(response['Contents']['response']['data']['content']['system']['jobs']['jobId'])
print(jobId+'\n')
```

The following JSON exemplifies the response generated when a job is created. Notice that the response includes the jobId (1619 in this case) and a brief description of the job type and its status. The jobId can be collected through the JSON path Contents->response->data->content->system->jobs->jobId

```json
{
    "Type": "success",
    "success": true,
    "Contents-Format": "json",
    "Contents": {
        "code": 0,
        "success": true,
        "cacheable": false,
        "metadata": {},
        "response": {
            "total": 1,
            "current-page": 1,
            "per-page": 10,
            "total-pages": 1,
            "state": 0,
            "response-messages": {
                "total": 0,
                "code": 0,
                "messages": []
            },
            "data": {
                "total-objects": null,
                "fields": null,
                "columns": null,
                "content": {
                    "system": {
                        "jobs": {
                            "jobId": 1619,
                            "description": "Importing Config into Project",
                            "state": 1,
                            "JobMsg": {
                                "statusCode": 1,
                                "statusMessage": "Completed"
                            },
                            "TasksMsg": [
                                "Completed. Filter executed successfully"
                            ],
                            "nextStep": []
                        }
                    }
                }
            }
        }
    }
}
```
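Navigating that nested path by hand is error-prone, so it can help to encapsulate it in a small helper. The sketch below is an illustration only: `extract_job_id` and the `sample` dictionary are hypothetical names, not part of the Expedition API, and the helper assumes the response has already been parsed with `r.json()`.

```python
def extract_job_id(response):
    """Walk the documented JSON path
    Contents -> response -> data -> content -> system -> jobs -> jobId
    and return the job identifier."""
    return (response['Contents']['response']['data']
                    ['content']['system']['jobs']['jobId'])

# Minimal stand-in for the job-creation response shown above.
sample = {
    'Contents': {
        'response': {
            'data': {
                'content': {
                    'system': {
                        'jobs': {'jobId': 1619, 'state': 1}
                    }
                }
            }
        }
    }
}

print(extract_job_id(sample))  # 1619
```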

Checking Job status

Within the answer to a job status request, we can explore the current state of its execution. The state field provides a numeric value that represents the execution percentage, from 0 (0%) to 1 (100%).

Once the execution is completed, with the state at 1, the task response can be requested.

The calls comply with the following structure:

GET https://<YourExpeditionIP>/api/v1/job/status/<jobId>

jobId (in url): job id value
Example: jobId: 1619
```python
jobFinished = False
print('CHECK IMPORT FINISHED')
r = requests.get('https://'+ip+'/api/v1/job/status/'+jobId, verify=False, headers=hed)
response = r.json()
jobState = json.dumps(response['Contents']['response']['data']['content']['system']['jobs']['state'])
percentage = float(jobState)*100
print('Importing configuration: ' + str(round(percentage, 2)) + '%')

# Wait until all content is retrieved from the device
while jobState != '1':
    sleep(0.5)
    r = requests.get('https://'+ip+'/api/v1/job/status/'+jobId, verify=False, headers=hed)
    response = r.json()
    jobState = json.dumps(response['Contents']['response']['data']['content']['system']['jobs']['state'])
    percentage = float(jobState)*100
    print('Importing configuration: ' + str(round(percentage, 2)) + '%')
```
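The polling pattern above can be wrapped into a reusable helper with a timeout guard. This is a sketch under stated assumptions: `wait_for_job` is a hypothetical name, and `get_state` is a caller-supplied callable (not part of the Expedition API) that would, in practice, perform the GET request shown earlier and return the state field.

```python
import time

def wait_for_job(get_state, poll_interval=0.5, timeout=600):
    """Poll a job until its state reaches 1 (100%), printing progress.

    get_state: callable returning the job's state as a number in [0, 1].
    Raises TimeoutError if the job does not finish within `timeout` seconds.
    """
    deadline = time.time() + timeout
    while True:
        state = float(get_state())
        print('Progress: ' + str(round(state * 100, 2)) + '%')
        if state >= 1:
            return state
        if time.time() > deadline:
            raise TimeoutError('job did not complete in time')
        time.sleep(poll_interval)
```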

Cancelling a Job

Certain jobs can be cancelled on demand if no side effects would impact the current project. To request the cancellation of a job, call the following URL with the DELETE method, providing the jobId to be cancelled.

If a job gets cancelled, its state is set to -1. If a job cannot be cancelled, its execution continues unaffected, its state keeps reflecting the current execution percentage, and a message informs that the current job cannot be cancelled.

DELETE https://<YourExpeditionIP>/api/v1/job/<jobId>

jobId (in url): job id value
Example: jobId: 1619

The following snippet requests the cancellation of a job:

```python
print('CANCELLING A JOB')
r = requests.delete('https://'+ip+'/api/v1/job/'+jobId, verify=False, headers=hed)
response = r.json()
cancelled = response['Contents']['response']['data']['content']['system']['jobs']['state']
if cancelled == -1:
    print('Job has been cancelled')
```

Getting the Job result

Once a job is completed, it is possible to continue with the next steps in the user's workflow. In some cases, the job can suggest a next step that is meaningful given the completed task. For instance, after a configuration has been migrated, a next step would recommend downloading the resulting PAN-OS configuration.

This information can be found in the nextStep field.
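As a sketch of reading that field, the nextStep list can be pulled from the same jobs structure used earlier. The helper name `get_next_steps` and the sample content below are hypothetical illustrations, not part of the Expedition API.

```python
def get_next_steps(response):
    """Return the nextStep list from a job status response, following
    Contents -> response -> data -> content -> system -> jobs -> nextStep."""
    jobs = response['Contents']['response']['data']['content']['system']['jobs']
    return jobs.get('nextStep', [])

# Hypothetical response fragment for a completed migration job.
sample = {'Contents': {'response': {'data': {'content': {'system': {'jobs': {
    'jobId': 1619,
    'state': 1,
    'nextStep': ['Download the resulting PAN-OS configuration'],
}}}}}}}

print(get_next_steps(sample))
```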