# S3 (CEPH) upload/download test with s3cmd
- Script: `eurac_test_upload_CEPH_s3_bucket.py`
- Version: 0.1.0
- Author: Ventura Bartolomeo
- Last updated: 2026-02-20
## Overview

This script performs an end-to-end check against an S3-compatible (CEPH) bucket using `s3cmd`:
- Verifies the presence of the `s3cmd` configuration file.
- Lists the bucket to validate credentials and connectivity.
- Uploads a small test file (`test_upload.txt`).
- (Demo) Works with the pseudo-folder prefix `test_folder/` and a sample PNG filename.
- Downloads a test object.
- Deletes the test object from the bucket.
Note: S3 has no real directories; folder paths are key prefixes.
## Requirements

- Python: 3.8+ (standard library only: `os`, `sys`).
- External tool: `s3cmd` installed and available in `PATH`.
- s3cmd config: a valid `.s3cfg` with credentials/endpoint for the CEPH cluster. Default path in the script: `/home/$USER/.s3cfg` (change it in the code if needed).
- Network: reachability to the S3 endpoint and permissions for `ls`, `put`, `get`, `del` on the target bucket.
Tip: use a limited-scope access key for a dedicated test bucket.
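The requirements above can be checked programmatically before running the script. A minimal preflight sketch (the helper name `preflight` and its default path are illustrative, not part of the script):

```python
import os
import shutil

def preflight(config_path=os.path.expanduser("~/.s3cfg")):
    """Check that s3cmd is on PATH and that the config file exists."""
    return {
        "s3cmd_on_path": shutil.which("s3cmd") is not None,
        "config_exists": os.path.isfile(config_path),
    }

print(preflight())
```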
## Installation

```bash
# (Optional) create a virtual environment
python -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate

# Install s3cmd if missing
# Debian/Ubuntu
sudo apt-get update && sudo apt-get install -y s3cmd
# or via pip (only if allowed in your environment)
pip install s3cmd
```
## Configure s3cmd to work with CEPH Object Storage

```bash
s3cmd --configure   # creates ~/.s3cfg
```

Important: if your `.s3cfg` is not at `~/.s3cfg`, update `s3cmd_config_file_path` in the script.
Note: you will need the following information to use the tool:
- `access_key`
- `secret_key`
- `host_base`
- `host_bucket`
These parameters will be generated and/or shared whenever necessary. A dedicated access key/secret key pair has been generated for each partner. Please contact Bartolomeo Ventura to get them.
1. Run `s3cmd --configure`.
2. Specify your Access Key.
3. Specify your Secret Key.
4. Insert "EU" for Default Region/bucket location.
5. Specify the S3 URL you need for S3 Endpoint (`host_base`).
6. Specify the S3 URL for the DNS-style bucket+hostname:port template for accessing a bucket (`host_bucket`).
7. Specify a password of your choice for Encryption password (optional).
8. Press return for Path to GPG program (optional).
9. Press return for Use HTTPS protocol (optional).
10. Press return for HTTP Proxy server name (optional).
11. Confirm by specifying `y`.
12. Confirm again by specifying `y`.
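After the dialogue completes, the resulting `~/.s3cfg` should contain at least the four values listed above. A sketch with placeholder values (the endpoint shown is an example, not the real one — use the URLs shared with you):

```ini
# ~/.s3cfg (placeholder values)
[default]
access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY
host_base = s3.example.org
host_bucket = %(bucket)s.s3.example.org
bucket_location = EU
use_https = True
```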
## Constants (hardcoded in the script)
The script exposes no CLI parameters; it relies on internal constants:
- `s3cmd_config_file_path`: config file path, default `'/home/$USER/.s3cfg'` (replace `$USER` with your home folder name, e.g. `/home/bventura/.s3cfg`).
- `bucket`: hardcoded as `s3://eo-adrop/` inside the shell commands.
- `test_file_path`: locally generated `test_upload.txt` in the script directory.
- `test_folder`: key prefix `test_folder/`.
- `png_file`: `test.png` (save a `.png` file in the script folder for the tests).
For flexibility, see Suggested improvements below.
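A sketch of how these constants might be declared, reconstructed from the list above (values are examples; check the actual script):

```python
import os

# Reconstructed constants (illustrative; the real script may differ)
s3cmd_config_file_path = "/home/bventura/.s3cfg"   # replace with your own home folder
bucket = "s3://eo-adrop/"
test_file_path = os.path.join(os.getcwd(), "test_upload.txt")
test_folder = "test_folder/"
png_file = "test.png"

# Object keys are just prefix + filename; there is no real folder
print(bucket + test_folder + png_file)
```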
## Quick start
Run from the project directory:
```bash
python use_CEPH_s3_bucket.py
```
## Download the Python script

Use this Python script to start connecting to and interacting with the CEPH S3 Object Storage.
What it does:
- Config check: ensures `s3cmd_config_file_path` exists. If missing, exit 1.
- Connectivity check: `s3cmd ls s3://eo-adrop/`.
- Test upload: creates `test_upload.txt` and uploads it to `s3://eo-adrop/test_upload.txt`.
- Bucket listing: `s3cmd ls s3://eo-adrop/` to verify presence.
- Prefix ops (`test_folder/`): demos commands on `test_folder/` and on `{{test_folder}}{{png_file}}`.
- Download object: `s3cmd get s3://eo-adrop/{{test_folder}}{{png_file}}` into the script directory.
- Delete object: `s3cmd del s3://eo-adrop/{{test_folder}}{{png_file}}`.
The script exits with code 1 on connection/permission failures or unsuccessful commands.
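The script assembles these commands as strings for `os.system`. A safer way to build the same invocations as argument lists (the helper name `build_s3cmd` is illustrative, not from the script):

```python
import shlex

def build_s3cmd(config_path, *args):
    """Build an s3cmd invocation as an argument list, avoiding shell-interpolation pitfalls."""
    return ["s3cmd", "-c", config_path, *args]

bucket = "s3://eo-adrop/"
ls_cmd = build_s3cmd("/home/bventura/.s3cfg", "ls", bucket)
get_cmd = build_s3cmd("/home/bventura/.s3cfg", "get", bucket + "test_folder/test.png")

# Argument lists can be passed directly to subprocess.run(cmd)
print(" ".join(shlex.quote(part) for part in ls_cmd))
```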
## Sample console output

```text
s3cmd config file path: /home/USER/.s3cfg
Checking connection to S3 bucket with command: s3cmd -c /home/USER/.s3cfg ls s3://eo-adrop/
Connection to S3 bucket successful.
...
Test file uploaded successfully to S3 bucket.
...
Object downloaded successfully from S3 bucket. get_object_result: 0
...
Test file deleted successfully from S3 bucket. delete_file_result: 0
```
## Objects created in the bucket

- `s3://eo-adrop/test_upload.txt`
- (During the prefix demo) `s3://eo-adrop/test_folder/test.png`
Remember to clean up test objects in shared environments.
## Minimal project layout

```text
<root>
├── use_CEPH_s3_bucket.py
├── test.png            # PNG file for the demo
└── test_upload.txt     # generated at runtime
```
## Troubleshooting

| Symptom | Likely cause | Fix |
|---|---|---|
| `Error: s3cmd config file not found` | Wrong `s3cmd_config_file_path` | Update the path or create `~/.s3cfg` and edit the script |
| Unable to connect to S3 bucket | Invalid endpoint/credentials, blocked network | Check `.s3cfg`, proxy/firewall, and bucket ACL/policy |
| `put`/`get`/`del` failures | Insufficient permissions on bucket/object | Update policy or use a key with required permissions |
| Download fails because "file already exists" | Script exits if destination file already exists | Remove the local file before re-running |
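The last fix in the table can be automated by deleting any stale local copy before re-running the download (the helper name `remove_stale` is illustrative):

```python
import os
import tempfile

def remove_stale(target):
    """Remove a leftover local file so a fresh `s3cmd get` will not abort."""
    if os.path.exists(target):
        os.remove(target)
        return True
    return False

# Demo on a throwaway file instead of a real download target:
fd, path = tempfile.mkstemp()
os.close(fd)
print(remove_stale(path))   # True: file existed and was removed
print(remove_stale(path))   # False: nothing left to remove
```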
## Security

- Do not commit `.s3cfg` or credentials to VCS. Use environment variables or a secret manager.
- Scope the S3 access key to the minimum required permissions and a test bucket.
## Performance & notes

- Synchronous CLI via `os.system`. For larger batches consider a Python SDK (e.g., `boto3`) and parallelization.
- Prefix operations do not create real folders; always verify keys with `s3cmd ls s3://bucket/prefix`.
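As a sketch of the parallelization note, independent transfers can be fanned out with a thread pool. Harmless placeholder commands stand in here for `s3cmd put` invocations:

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

def run_one(cmd):
    """Run a single command and return its stripped stdout."""
    return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()

# Placeholder commands; in practice each would be an s3cmd invocation.
cmds = [[sys.executable, "-c", f"print({i})"] for i in range(4)]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_one, cmds))

print(results)  # pool.map preserves input order
```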
## Suggested improvements

- CLI parameters via `argparse` (`--config`, `--bucket`, `--prefix`, `--file`).
- Error handling with `subprocess.run(..., check=True)` to capture `stdout`/`stderr` and return codes.
- Environment variables instead of hardcoded paths.
- Resource cleanup with `try/finally` blocks.
- Dry-run option (`s3cmd --dry-run`) for safer testing.
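The `subprocess.run` suggestion might look like this (a sketch; `run_checked` is a hypothetical helper, demonstrated with a harmless command instead of `s3cmd`):

```python
import subprocess
import sys

def run_checked(cmd):
    """Run a command, capture its output, and raise on a non-zero exit code."""
    try:
        result = subprocess.run(cmd, check=True, capture_output=True, text=True)
        return result.stdout
    except subprocess.CalledProcessError as exc:
        print(f"Command failed ({exc.returncode}): {exc.stderr}", file=sys.stderr)
        raise

print(run_checked([sys.executable, "-c", "print('ok')"]), end="")
```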
## License
## Changelog

- 0.1.0: Initial generated documentation.