Parallel Writes Per Job Using REST API

Learn how to manage parallel writes for migration jobs using the DryvIQ REST API.

Written by Andrea Harvey

Updated at May 2nd, 2025

Table of Contents

  • Overview
  • Parallel Writes and Memory Usage
  • Addressing Memory Issues
  • Default Parallel Write Settings
  • Set Parallel Writes for a Job
  • Example
  • Update Parallel Writes on an Existing Job

Overview

Parallel writes is a configurable setting that controls how many web service requests an instance of DryvIQ on a given node will issue in parallel. However, it is essential to note that increasing the number of parallel writes does not always result in faster or better performance. Many factors must be taken into account. Consult your Consultative Services representative or Customer Support for assistance understanding how this setting can impact your jobs and configuration. Also note that this setting does not apply if you are running DryvIQ in an auto-scale environment; scaling will occur as configured.

Currently, you can only use the REST API to set the parallel writes for an individual job. Global parallel writes can be set in the Performance Settings.

 

Parallel Writes and Memory Usage

A job in DryvIQ does not use a fixed amount of memory. Memory usage for individual jobs will vary based on several factors, the most significant being the number of files and how they are distributed (all in one folder, spread across subfolders, etc.). To avoid excessive memory usage related to how content is distributed, DryvIQ recommends preserving the system default for Directory Item Limits. The main factors affecting memory usage for a DryvIQ node are the number of concurrent jobs, the Parallel Writes Per Job for each job, and the memory impact of the specific jobs.

Addressing Memory Issues

If memory issues occur after increasing the Directory Item Limit or Parallel Writes Per Job, the only mitigation is reducing the number of concurrent jobs or breaking up the source content into multiple jobs. DryvIQ will continue using memory until it runs out (it will not limit itself), and it will eventually reach the environment's maximum capacity. Reaching the environment maximum may result in a non-graceful termination of DryvIQ, which could cause jobs to retransfer files, permissions, or metadata. Larger jobs stopped in this manner will enter recovery mode, continue to use all available memory, stop again, and repeat the process in a loop, resulting in a loss of throughput.

Default Parallel Write Settings

The default parallel write value is 4, 8, or 12, depending on the number of CPU logical processors on the machine running the DryvIQ service.

  • If the machine has 2 CPU logical processors, the default parallel writes value is 4.
  • If the machine has 8 CPU logical processors, the default parallel writes value is 8.
  • If the machine has 32 CPU logical processors, the default parallel writes value is 12.
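
For quick reference, the documented values can be captured in a small lookup like the Python sketch below. Only the three processor counts listed above are documented here; the cutoffs used for other processor counts are assumptions made for illustration, so confirm the effective default with Customer Support if it matters for your sizing.

# Illustrative only: maps CPU logical processor counts to the default
# parallel writes value. The documented points are 2 -> 4, 8 -> 8, and
# 32 -> 12; the thresholds between them are assumptions, not published
# DryvIQ behavior.
def default_parallel_writes(logical_processors: int) -> int:
    if logical_processors <= 2:
        return 4
    if logical_processors <= 8:
        return 8
    return 12

print(default_parallel_writes(2))   # 4
print(default_parallel_writes(8))   # 8
print(default_parallel_writes(32))  # 12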

Set Parallel Writes for a Job

Add the following to the transfer block of your job definition.

{
    "performance": {
        "parallel_writes": {"requested": 4}
    }
}

Example

{
    "name": "Test Parallel Writes",
    "kind": "transfer",
    "transfer": {
        "audit_level": "trace",
        "transfer_type": "copy",
        "performance": {
            "parallel_writes": {"requested": 4}
        },
        "source": {
            "connection": {
                "id": "{{cloud_connection}}"
            },
            "target": {
                "path": "/MASTER_TESTS/BASIC TRANSFER TESTS"
            }
        },
        "destination": {
            "connection": {
                "id": "{{cloud_connection}}"
            },
            "target": {
                "path": "/SAP/LB/Test_ParallelWrites"
            }
        }
    },
    "schedule": {
        "mode": "manual"
    }
}

To review the job and confirm the requested parallel writes setting, use the following call.

GET {{url}}v1/jobs?include=all
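
If you prefer to script job creation and verification, the Python sketch below shows one way to do it. It is a minimal sketch, not DryvIQ-documented tooling: it assumes jobs are created with a POST to {{url}}v1/jobs, that requests use bearer-token authentication, and that the job listing wraps results in an "items" array with the setting returned under transfer.performance.parallel_writes. Adjust these assumptions to match your environment.

# A minimal sketch, not official DryvIQ tooling. The POST endpoint, bearer
# authentication, response fields ("id", "items"), and base URL below are
# assumptions; substitute the values used in your environment.
import requests

BASE_URL = "https://your-dryviq-host/api/"      # stand-in for {{url}}
HEADERS = {
    "Authorization": "Bearer your-api-token",   # authentication scheme assumed
    "Content-Type": "application/json",
}

job_definition = {
    "name": "Test Parallel Writes",
    "kind": "transfer",
    "transfer": {
        "audit_level": "trace",
        "transfer_type": "copy",
        "performance": {"parallel_writes": {"requested": 4}},
        "source": {
            "connection": {"id": "your-connection-id"},
            "target": {"path": "/MASTER_TESTS/BASIC TRANSFER TESTS"},
        },
        "destination": {
            "connection": {"id": "your-connection-id"},
            "target": {"path": "/SAP/LB/Test_ParallelWrites"},
        },
    },
    "schedule": {"mode": "manual"},
}

# Create the job (creation endpoint assumed to be POST v1/jobs).
created = requests.post(f"{BASE_URL}v1/jobs", json=job_definition, headers=HEADERS)
created.raise_for_status()
job_id = created.json()["id"]                   # "id" field assumed

# Verify the requested parallel writes value via the listing call above.
jobs = requests.get(f"{BASE_URL}v1/jobs", params={"include": "all"}, headers=HEADERS)
jobs.raise_for_status()
for job in jobs.json().get("items", []):        # "items" wrapper assumed
    if job.get("id") == job_id:
        print(job["transfer"]["performance"]["parallel_writes"])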

Update Parallel Writes on an Existing Job

The following body in a PATCH request to {{url}}v1/jobs/{{job}} will update the parallel_writes value to 8.

{
    "kind": "transfer",
    "transfer": {
        "performance": {
            "parallel_writes": {"requested": 8}
        }
    }
}
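
As a scripted alternative, the Python sketch below sends that PATCH body with the requests library. The base URL, job id, and bearer-token authentication are placeholders standing in for {{url}}, {{job}}, and your credentials; adjust them for your environment.

# A minimal sketch, not official DryvIQ tooling. The base URL, job id, and
# bearer authentication are placeholders/assumptions for your environment.
import requests

BASE_URL = "https://your-dryviq-host/api/"      # stand-in for {{url}}
JOB_ID = "your-job-id"                          # stand-in for {{job}}
HEADERS = {
    "Authorization": "Bearer your-api-token",   # authentication scheme assumed
    "Content-Type": "application/json",
}

patch_body = {
    "kind": "transfer",
    "transfer": {
        "performance": {
            "parallel_writes": {"requested": 8}
        }
    },
}

response = requests.patch(f"{BASE_URL}v1/jobs/{JOB_ID}", json=patch_body, headers=HEADERS)
response.raise_for_status()
# Response structure assumed; inspect response.json() to confirm the new value.
print(response.json().get("transfer", {}).get("performance"))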

 

Related Articles

  • Connection Management Using REST API
  • Connection Pools Using REST API
  • Performance Counter Metrics Using REST API
