Workflow Filesystem
Your workflow runs inside a managed desktop. Any files you write to the workflow’s working directory are archived into assets.zip when the run completes.
from pathlib import Path
from pydantic import BaseModel

class Params(BaseModel):
    ...

class Result(BaseModel):
    ...

def run(params: Params) -> Result:
    # Create output directories
    Path("./reports").mkdir(parents=True, exist_ok=True)

    # Write files — these end up in assets.zip
    Path("./reports/summary.txt").write_text("Report content")
Drive Access
Use Computer.drive() to access files from the host filesystem (outside the workflow’s working directory).
computer.drive(path: str) -> Drive
The working directory at /assets is a tmpfs — files written there are archived into assets.zip when the run completes. Use a drive for files that live anywhere else on the host.
Mount a directory
from nen import Computer
def run(params: Params) -> Result:
    computer = Computer()

    downloads = computer.drive("~/Downloads")
    mount = computer.drive("/mnt/tmp")
List files
drive.files(pattern: str = "*") -> list[File]
def run(params: Params) -> Result:
    computer = Computer()
    downloads = computer.drive("~/Downloads")

    for f in downloads.files():
        print(f.name, f.size)

def run(params: Params) -> Result:
    computer = Computer()
    downloads = computer.drive("~/Downloads")

    # Filter by glob pattern
    for f in downloads.files("*.pdf"):
        print(f.name)
Copy files into the workflow directory
from pathlib import Path

from nen import Computer

def run(params: Params) -> Result:
    computer = Computer()
    downloads = computer.drive("~/Downloads")

    Path("./output").mkdir(parents=True, exist_ok=True)
    for f in downloads.files("*.pdf"):
        (Path("./output") / f.name).write_bytes(f.read_bytes())
Copy large files
When copying many or large files, write each file to disk as soon as it is read, rather than accumulating all contents in memory:
def run(params: Params) -> Result:
    computer = Computer()
    downloads = computer.drive("~/Downloads")

    for f in downloads.files("*.csv"):
        # Write each file into the working directory as soon as it is read
        with open(f.name, "w") as target:
            target.write(f.read_text())
File Object
| Property | Type | Description |
|---|---|---|
| name | str | Filename |
| size | int | Size in bytes |
| modified | datetime | Last modified timestamp |
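The properties above are enough for simple selection logic, such as picking the newest file. In the sketch below, File is stubbed with a dataclass for illustration; in a workflow these objects come from drive.files():

```python
from dataclasses import dataclass
from datetime import datetime

# Stand-in for the File objects returned by drive.files()
@dataclass
class File:
    name: str
    size: int
    modified: datetime

files = [
    File("jan.csv", 1024, datetime(2024, 1, 15)),
    File("mar.csv", 2048, datetime(2024, 3, 2)),
]

# Pick the most recently modified file
latest = max(files, key=lambda f: f.modified)
print(latest.name)  # mar.csv

# Total size of all listed files, in bytes
print(sum(f.size for f in files))  # 3072
```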
read_bytes()
file.read_bytes() -> bytes
Read file content as bytes. Use this when working with binary files (PDFs, images, etc.). For text-based formats like CSV, use read_text() instead.
# Copy binary file into the workflow directory
(Path("./output") / f.name).write_bytes(f.read_bytes())
read_text()
file.read_text(encoding: str = "utf-8") -> str
Read file content as a decoded string. Use this when working with plain text files such as CSVs, logs, or configuration files.
The encoding parameter selects the codec used to decode the raw file bytes into a string. It defaults to "utf-8" and accepts any encoding name recognized by Python’s codecs module, such as "utf-16" or "latin-1". If the file contains bytes that are invalid for the specified encoding, a UnicodeDecodeError is raised.
# Read a CSV file with default UTF-8 encoding
content = f.read_text()
# Read a file using a different encoding
content = f.read_text(encoding="utf-16")
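When the source encoding is not known in advance, one option is to read the raw bytes and try codecs in order. The helper below is a sketch, not part of the File API; decode_with_fallback and its codec list are assumptions you would tune to your data:

```python
def decode_with_fallback(raw: bytes, codecs=("utf-8", "utf-16", "latin-1")) -> str:
    # Try each codec in order; "latin-1" accepts any byte sequence,
    # so it serves as a last resort that never raises
    for enc in codecs:
        try:
            return raw.decode(enc)
        except UnicodeDecodeError:
            continue
    raise ValueError("no codec in the list could decode the data")

# UTF-16 bytes fail UTF-8 decoding (the BOM byte 0xFF is invalid UTF-8),
# so the helper falls through to the next codec
print(decode_with_fallback("héllo".encode("utf-16")))  # héllo
```

In a workflow you would obtain the raw bytes with f.read_bytes() and pass them to such a helper.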
Assets in Webhook Response
When the run completes, any files in the working directory are archived and delivered with the webhook:
{
  "success": true,
  "result": { "customer_id": "ABC123" },
  "assets": "https://s3.../assets.zip"
}
Organize output files into subdirectories for clarity. The full directory structure is preserved in the zip archive.
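As an illustration of structure preservation, the stdlib-only sketch below builds a nested layout and archives it the way a run produces assets.zip. The directory names and the archiving loop are illustrative, not the platform’s actual packaging code:

```python
import tempfile
import zipfile
from pathlib import Path

# Simulate a workflow working directory with nested output folders
work = Path(tempfile.mkdtemp())
(work / "reports").mkdir()
(work / "reports" / "summary.txt").write_text("Report content")
(work / "exports").mkdir()
(work / "exports" / "data.csv").write_text("a,b\n1,2\n")

# Archive everything, keeping paths relative to the working directory
archive = work / "assets.zip"
with zipfile.ZipFile(archive, "w") as zf:
    for p in sorted(work.rglob("*")):
        if p.is_file() and p.name != "assets.zip":
            zf.write(p, p.relative_to(work))

# The nested layout survives inside the archive
with zipfile.ZipFile(archive) as zf:
    print(sorted(zf.namelist()))  # ['exports/data.csv', 'reports/summary.txt']
```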
Common Pattern: Download and Save
from pathlib import Path
from nen import Agent, Computer
def run(params: Params) -> Result:
    agent = Agent()
    computer = Computer()

    # Trigger a download in the application
    agent.execute("Click the Export CSV button")
    agent.verify("Has the file finished downloading?", timeout=30)

    # Copy from Downloads to the workflow directory
    downloads = computer.drive("~/Downloads")
    Path("./exports").mkdir(parents=True, exist_ok=True)
    for f in downloads.files("*.csv"):
        (Path("./exports") / f.name).write_text(f.read_text())