📘 Understanding and Fixing “The request message is too big” in SharePoint CSOM Uploads
When migrating or uploading documents to SharePoint using the Client-Side Object Model (CSOM), you may encounter this error:
The request message is too big. The server does not allow messages larger than 2,097,152 bytes.
This error is common in automation, Azure Functions, and custom migration tools that rely on CSOM for file transfer.
Let’s understand why it happens, what it means, and how to fix it properly.
🧠 What the Error Means
This message indicates that SharePoint Online rejected a request because the payload exceeded the server’s per-request size limit.
It doesn’t mean the file itself is “too large” — SharePoint supports much larger files (up to 250 GB) —
but rather that the way your code sends it violates a low-level message limit.
By default, CSOM sends file content in a single HTTP POST when using:
```csharp
FileCreationInformation newFile = new FileCreationInformation
{
    Content = fileBytes, // entire file loaded in memory
    Url = fileName,
    Overwrite = true
};
folder.Files.Add(newFile);
ctx.ExecuteQuery();
```
If fileBytes is larger than roughly 2 MB, the HTTP request itself becomes too big.
SharePoint will immediately respond with HTTP 400 and this exact message.
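Because the failure only surfaces when ExecuteQuery() sends the batch, it is worth trapping explicitly. Here is a minimal sketch, reusing ctx and fileName from the snippet above; the exact exception type can vary across CSOM versions, so it filters on the message text:

```csharp
try
{
    ctx.ExecuteQuery(); // sends the single POST containing the file bytes
}
catch (Exception ex) when (ex.Message.Contains("request message is too big"))
{
    // The payload crossed the ~2 MB message limit; fall back to a chunked upload
    Console.WriteLine($"Message size limit hit for '{fileName}': {ex.Message}");
}
```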
⚙️ Why This Happens
SharePoint Online enforces several size-related restrictions at the transport level:
| Limit Type | Description | Default Threshold |
|---|---|---|
| Message size | Maximum payload per HTTP request (applies to CSOM) | ~2 MB |
| File upload limit | Maximum file size supported by SharePoint Online | 250 GB |
| Timeout limit | Maximum duration a request can execute | ~180 seconds |
So even if your file is well under the 250 GB ceiling, the CSOM call you use will still fail if the request body exceeds the ~2 MB message limit.
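A practical consequence is to branch on file size before choosing an upload path. A minimal sketch; UploadSmallFile and UploadFileInChunks are placeholder names for the two strategies discussed later in this article:

```csharp
const long MessageSizeLimit = 2 * 1024 * 1024; // ~2 MB per-request cap
long size = new FileInfo(localPath).Length;
if (size <= MessageSizeLimit)
    UploadSmallFile(ctx, targetFolderUrl, localPath);    // single Files.Add request
else
    UploadFileInChunks(ctx, targetFolderUrl, localPath); // chunked session, shown below
```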
🧩 Example Scenario
Imagine a migration job copying a 5 MB Excel file:
```
09:30:27 - ❌ Error copying file '01_Test Point 1.xlsx':
The request message is too big. The server does not allow messages larger than 2097152 bytes.
```
This happens because the migration script tries to load the entire file content into memory (byte[]) and push it in a single request.
🔍 Root Cause Summary
| Layer | Component | Limitation |
|---|---|---|
| CSOM | FileCreationInformation.Content | Serializes the entire byte array into one request, so it must stay under ~2 MB |
| HTTP | POST message body | Capped at ~2 MB by SharePoint Online's maximum message size |
| Azure Function or WPF | Default request serializer | Sends the whole array as a single binary stream |
🚧 Incorrect Approach
A typical (failing) upload looks like this:
```csharp
var bytes = File.ReadAllBytes(localPath);
var info = new FileCreationInformation
{
    Content = bytes, // ❌ Entire content in one go
    Url = fileName,
    Overwrite = true
};
targetFolder.Files.Add(info);
ctx.ExecuteQuery(); // Fails if >2 MB
```
✅ Correct Approach — Chunked Upload
The best way to upload files larger than 2 MB via CSOM is the chunked upload pattern,
which splits the file into smaller binary blocks and sends them sequentially through an upload session (StartUpload, ContinueUpload, FinishUpload).
Here's a simplified version of that approach:
```csharp
public static void UploadFileInChunks(ClientContext ctx, string targetFolderUrl, string localFilePath)
{
    const int chunkSize = 2 * 1024 * 1024; // 2 MB, stays below the message limit
    string fileName = Path.GetFileName(localFilePath);
    string fileUrl = $"{targetFolderUrl}/{fileName}";
    Guid uploadId = Guid.NewGuid(); // identifies the upload session across requests
    Folder folder = ctx.Web.GetFolderByServerRelativeUrl(targetFolderUrl);
    ctx.Load(folder);
    ctx.ExecuteQuery();
    using (FileStream fs = new FileStream(localFilePath, FileMode.Open, FileAccess.Read))
    {
        // Files that fit in a single message need no session
        if (fs.Length <= chunkSize)
        {
            folder.Files.Add(new FileCreationInformation { ContentStream = fs, Url = fileName, Overwrite = true });
            ctx.ExecuteQuery();
            return;
        }
        byte[] buffer = new byte[chunkSize];
        long fileOffset = 0, bytesRemaining = fs.Length;
        bool first = true;
        int bytesRead;
        while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) > 0)
        {
            bytesRemaining -= bytesRead;
            using (var chunk = new MemoryStream(buffer, 0, bytesRead))
            {
                if (first)
                {
                    // Create an empty stub file, then open the session with the first chunk
                    var info = new FileCreationInformation { ContentStream = new MemoryStream(), Url = fileName, Overwrite = true };
                    var uploadFile = folder.Files.Add(info);
                    ClientResult<long> result = uploadFile.StartUpload(uploadId, chunk);
                    ctx.ExecuteQuery();
                    fileOffset = result.Value;
                    first = false;
                }
                else
                {
                    var uploadFile = ctx.Web.GetFileByServerRelativeUrl(fileUrl);
                    if (bytesRemaining == 0)
                    {
                        uploadFile.FinishUpload(uploadId, fileOffset, chunk); // final chunk commits the file
                        ctx.ExecuteQuery();
                    }
                    else
                    {
                        ClientResult<long> result = uploadFile.ContinueUpload(uploadId, fileOffset, chunk);
                        ctx.ExecuteQuery();
                        fileOffset = result.Value;
                    }
                }
            }
        }
    }
    Console.WriteLine($"✅ File '{fileName}' uploaded in chunks successfully.");
}
```
This process:
- Divides the file into small pieces (typically 2 MB each);
- Sends each part separately via multiple requests;
- Reassembles them on the SharePoint side.
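A hypothetical call site, assuming GetAuthenticatedContext is your own helper that returns an authenticated ClientContext:

```csharp
// GetAuthenticatedContext is a placeholder for whatever auth flow you use.
using (ClientContext ctx = GetAuthenticatedContext("https://contoso.sharepoint.com/sites/Migration"))
{
    UploadFileInChunks(ctx, "/sites/Migration/Shared Documents", @"C:\Data\01_Test Point 1.xlsx");
}
```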
⚙️ Alternative Solutions
If you’re migrating large volumes of data or running into performance issues, consider one of these more scalable options:
| Option | Description | Pros | Cons |
|---|---|---|---|
| PnP Framework | Has built-in UploadFileAsync() supporting chunked uploads | Simple API | Adds dependency |
| Microsoft Graph API | Modern REST-based upload with resumable sessions (sketched below) | Ideal for cloud automation | Requires Azure AD App permissions |
| SharePoint Migration API | Designed for high-volume, background migrations | Handles TBs of data | Complex setup (requires Azure Storage) |
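For illustration, here is a minimal sketch of a Microsoft Graph resumable upload using plain HttpClient; the driveId and accessToken values are assumptions you would obtain from your Azure AD app registration:

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text.Json;
using System.Threading.Tasks;

public static async Task UploadViaGraphAsync(HttpClient http, string accessToken, string driveId, string localPath)
{
    string fileName = Path.GetFileName(localPath);

    // 1. Create an upload session (the only call that needs the Bearer token)
    var request = new HttpRequestMessage(HttpMethod.Post,
        $"https://graph.microsoft.com/v1.0/drives/{driveId}/root:/{fileName}:/createUploadSession");
    request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
    var response = await http.SendAsync(request);
    response.EnsureSuccessStatusCode();
    using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
    string uploadUrl = doc.RootElement.GetProperty("uploadUrl").GetString();

    // 2. PUT the file in chunks; the uploadUrl is pre-authenticated, and Graph
    //    recommends chunk sizes that are multiples of 320 KiB
    const int chunkSize = 320 * 1024 * 10; // 3.2 MB
    using var fs = new FileStream(localPath, FileMode.Open, FileAccess.Read);
    long total = fs.Length, offset = 0;
    byte[] buffer = new byte[chunkSize];
    int read;
    while ((read = await fs.ReadAsync(buffer, 0, buffer.Length)) > 0)
    {
        var content = new ByteArrayContent(buffer, 0, read);
        content.Headers.ContentRange = new ContentRangeHeaderValue(offset, offset + read - 1, total);
        (await http.PutAsync(uploadUrl, content)).EnsureSuccessStatusCode();
        offset += read;
    }
}
```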
🧾 Best Practices for Large File Uploads
- Never load large files fully into memory; always stream them using FileStream or OpenBinaryStream().
- Respect the 2 MB message boundary in custom CSOM implementations.
- Add retry logic for transient errors or throttling (HTTP 429/503); see the sketch after this list.
- Log progress and performance (e.g., file name, size, duration).
- Test with different environments — WPF, console, Azure Function — since memory and timeout behavior vary.
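A minimal sketch of such retry logic, modeled on the common incremental back-off pattern for CSOM; the retry count and delays are illustrative:

```csharp
using System;
using System.Net;
using System.Threading;
using Microsoft.SharePoint.Client;

// Wraps ExecuteQuery with retries for throttling responses (HTTP 429/503).
public static void ExecuteQueryWithRetry(ClientContext ctx, int maxRetries = 5)
{
    int delaySeconds = 5;
    for (int attempt = 0; ; attempt++)
    {
        try
        {
            ctx.ExecuteQuery();
            return;
        }
        catch (WebException ex) when (attempt < maxRetries &&
            ex.Response is HttpWebResponse resp &&
            ((int)resp.StatusCode == 429 || (int)resp.StatusCode == 503))
        {
            // Honor Retry-After if the service supplies one, otherwise back off exponentially
            string retryAfter = resp.Headers["Retry-After"];
            int wait = int.TryParse(retryAfter, out int s) ? s : delaySeconds;
            Thread.Sleep(TimeSpan.FromSeconds(wait));
            delaySeconds *= 2;
        }
    }
}
```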
💡 How to Explain It to Stakeholders
The error means that SharePoint rejected the upload because a single request exceeded the 2 MB message size limit the API allows per call; the file itself is not too large for SharePoint.
We’ll adjust the migration logic to send files in smaller binary chunks, which resolves this limitation and ensures that larger documents can be migrated successfully.
🏁 Conclusion
The “request message too big” error is not a storage limitation — it’s a transport-level constraint of the CSOM API.
By implementing chunked uploads or adopting more modern frameworks like PnP or Graph, you can safely migrate files of any supported size while maintaining full metadata fidelity and reliability.
