# juicesync
Juicesync is a tool to move your data in object storage between any clouds or regions; it also supports local files, SFTP, and HDFS.
## How it works
Juicesync scans all the keys in the source and destination stores, compares them in ascending order to find missing or outdated keys, then downloads those keys from the source and uploads them to the destination in parallel.
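Because keys are compared in ascending order, the `--start` and `--end` options (documented under Usage below) can restrict a run to a contiguous key range, and `--dry` walks the comparison without copying anything. A sketch with placeholder bucket names and prefixes:

```sh
# Preview which keys in the range [logs/2020, logs/2021] would be copied,
# without transferring any data (bucket names and prefixes are hypothetical).
juicesync --dry --verbose --start logs/2020 --end logs/2021 \
    s3://src-bucket/ oss://dst-bucket/
```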
## Install

### With Homebrew

```sh
brew install juicedata/tap/juicesync
```

### Download binary release

Download a prebuilt binary from the releases page.
## Develop

We use go mod to manage dependencies; if you are not sure how to use it, refer to the official documentation.
## Upgrade

- Use Homebrew to upgrade (a sketch of the command follows this list), or
- Download a new version from the releases page
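If juicesync was installed via the Homebrew tap above, upgrading is most likely the standard `brew upgrade` flow (a sketch; exact tap behavior may vary):

```sh
# Refresh Homebrew's formula index, then upgrade the juicesync formula.
brew update
brew upgrade juicesync
```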
## Usage
```
$ juicesync -h
NAME:
   juicesync - rsync for cloud storage

USAGE:
   juicesync [options] SRC DST
   SRC and DST should be [NAME://][ACCESS_KEY:SECRET_KEY@]BUCKET[.ENDPOINT][/PREFIX]

VERSION:
   v0.5.0-1-gce9968c

COMMANDS:
   help, h  Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --start KEY, -s KEY          the first KEY to sync
   --end KEY, -e KEY            the last KEY to sync
   --threads value, -p value    number of concurrent threads (default: 10)
   --http-port PORT             HTTP PORT to listen to (default: 6070)
   --update, -u                 update existing file if the source is newer (default: false)
   --force-update, -f           always update existing file (default: false)
   --perms                      preserve permissions (default: false)
   --dirs                       Sync directories or holders (default: false)
   --dry                        don't copy file (default: false)
   --delete-src, --deleteSrc    delete objects from source after synced (default: false)
   --delete-dst, --deleteDst    delete extraneous objects from destination (default: false)
   --exclude PATTERN            exclude keys containing PATTERN (POSIX regular expressions)
   --include PATTERN            only include keys containing PATTERN (POSIX regular expressions)
   --manager value              manager address
   --worker value               hosts (separated by comma) to launch worker
   --verbose, -v                turn on debug log (default: false)
   --quiet, -q                  change log level to ERROR (default: false)
   --help, -h                   show help (default: false)
   --version, -V                print only the version (default: false)
```
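For instance, mirroring a local directory into an S3 bucket, copying only files whose source copy is newer, could look like this (the path and bucket name are placeholders):

```sh
# Copy newer files from a local directory to an S3 prefix using 20 threads.
# /data/backup and my-backup-bucket are hypothetical names.
juicesync --update --threads 20 /data/backup/ s3://my-backup-bucket/backup/
```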
SRC and DST must be a URI of one of the following storage systems:
- file: local files
- sftp: SFTP (file transfer over SSH)
- s3: Amazon S3
- hdfs: Hadoop File System (HDFS)
- gcs: Google Cloud Storage
- wasb: Windows Azure Blob Storage
- oss: Aliyun OSS
- cos: Tencent Cloud COS
- ks3: KSYun KS3
- ufile: UCloud UFile
- qingstor: Qingcloud QingStor
- bos: Baidu Cloud Object Storage
- jss: JCloud Object Storage
- qiniu: Qiniu
- b2: Backblaze B2
- space: Digital Ocean Space
- obs: Huawei Object Storage Service
SRC and DST should be in the following format:
```
[NAME://][ACCESS_KEY:SECRET_KEY@]BUCKET[.ENDPOINT][/PREFIX]
```
Some examples:
- local/path
- user@host:path
- file:///Users/me/code/
- hdfs://hdfs@namenode1:9000,namenode2:9000/user/
- s3://my-bucket/
- s3://access-key:secret-key-id@my-bucket/prefix
- gcs://my-bucket.us-west1.googleapi.com/
- oss://test
- cos://test-1234
- obs://my-bucket
- bos://my-bucket
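Putting the format together, a sync between two providers with credentials embedded in the URI might look like the following (the access key, secret key, and bucket names are placeholders; credentials can also come from environment variables, see the notes below):

```sh
# Sync an S3 prefix into an OSS bucket; AKIAXXXX/SECRETXXXX are hypothetical.
juicesync s3://AKIAXXXX:SECRETXXXX@my-bucket/prefix oss://dst-bucket/prefix
```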
Note:

- It's recommended to run juicesync in the target region for better performance.
- The endpoint for a bucket of S3, OSS, COS, OBS, or BOS is discovered automatically, so `SRC` and `DST` can use the format `NAME://[ACCESS_KEY:SECRET_KEY@]BUCKET[/PREFIX]`. `ACCESS_KEY` and `SECRET_KEY` can be provided by the corresponding environment variables (see below).
- S3:
  - The access key and secret key for S3 can be provided by `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`, or by an IAM role.
- COS:
  - The AppID should be part of the bucket name.
  - The credential can be provided by the environment variables `COS_SECRETID` and `COS_SECRETKEY`.
- GCS: The machine should be authorized to access Google Cloud Storage.
- OSS:
  - The credential can be provided by the environment variables `ALICLOUD_ACCESS_KEY_ID` and `ALICLOUD_ACCESS_KEY_SECRET`, by a RAM role, or by EMR MetaService.
- OBS:
  - The credential can be provided by the environment variables `HWCLOUD_ACCESS_KEY` and `HWCLOUD_SECRET_KEY`.
- BOS:
  - The credential can be provided by the environment variables `BDCLOUD_ACCESS_KEY` and `BDCLOUD_SECRET_KEY`.
- Qiniu: The S3 endpoint should be used for Qiniu, for example, abc.cn-north-1-s3.qiniu.com. If there are keys starting with "/", the domain should be provided as `QINIU_DOMAIN`.
- sftp: If your target machine uses SSH certificates instead of a password, pass the path to your private key file via the environment variable `SSH_PRIVATE_KEY_PATH`, like `SSH_PRIVATE_KEY_PATH=/home/someuser/.ssh/id_rsa juicesync [src] [dst]`.
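As the S3 note above says, credentials can also come from the environment instead of the URI, for example (key values are placeholders):

```sh
# Provide S3 credentials via environment variables rather than
# embedding them in the URI (values are hypothetical).
export AWS_ACCESS_KEY_ID=AKIAXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxx
juicesync s3://my-bucket/prefix /local/copy/
```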