mirror of https://github.com/ohmyzsh/ohmyzsh.git synced 2026-01-05 20:57:46 +08:00

Compare commits


5 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Fabian Bonk | d81b4ac9f2 | git: run gfa with --jobs=10 (fetch remotes in parallel) (#9268). Co-authored-by: Marc Cornellà <marc.cornella@live.com> | 2020-10-03 20:29:26 +02:00 |
| Marc Cornellà | 89278c71b2 | bundler: refactor bundler plugin and clean up gem wrappers | 2020-10-03 18:41:42 +02:00 |
| Sandip Subedi | e09aac3751 | bundler: format aliases table and clean up README (#9300). Co-authored-by: Marc Cornellà <marc.cornella@live.com> | 2020-10-03 18:38:44 +02:00 |
| Angel Ramboi | 7fce07a50c | Add completion plugin for IPFS (InterPlanetary File System) (#4737) | 2020-10-03 11:49:42 +02:00 |
| Amir Masoud Abdol | d5dc9f7153 | Add sublime-merge plugin (#7228) | 2020-10-03 11:47:18 +02:00 |
8 changed files with 947 additions and 86 deletions

plugins/bundler/README.md
View File

@@ -1,34 +1,48 @@
# Bundler
- Adds completion for basic bundler commands
This plugin adds completion for basic bundler commands, as well as aliases and helper functions for
an easier experience with bundler.
- Adds short aliases for common bundler commands
- `ba` aliased to `bundle add`
- `be` aliased to `bundle exec`.
It also supports aliases (if `rs` is `rails server`, `be rs` will bundle-exec `rails server`).
- `bl` aliased to `bundle list`
- `bp` aliased to `bundle package`
- `bo` aliased to `bundle open`
- `bout` aliased to `bundle outdated`
- `bu` aliased to `bundle update`
- `bi` aliased to `bundle install --jobs=<cpu core count>` (only for bundler `>= 1.4.0`)
- `bcn` aliased to `bundle clean`
- `bck` aliased to `bundle check`
To use it, add `bundler` to the plugins array in your zshrc file:
- Adds a wrapper for common gems:
- Looks for a binstub under `./bin/` and executes it (if present)
- Calls `bundle exec <gem executable>` otherwise
```zsh
plugins=(... bundler)
```
## Aliases
| Alias | Command | Description |
|--------|--------------------------------------|------------------------------------------------------------------------------------------|
| `ba` | `bundle add` | Add gem to the Gemfile and run bundle install |
| `bck` | `bundle check` | Verifies if dependencies are satisfied by installed gems |
| `bcn` | `bundle clean` | Cleans up unused gems in your bundler directory |
| `be` | `bundle exec` | Execute a command in the context of the bundle |
| `bi` | `bundle install --jobs=<core_count>` | Install the dependencies specified in your Gemfile (using all cores in bundler >= 1.4.0) |
| `bl` | `bundle list` | List all the gems in the bundle |
| `bo` | `bundle open` | Opens the source directory for a gem in your bundle |
| `bout` | `bundle outdated` | List installed gems with newer versions available |
| `bp` | `bundle package` | Package your needed .gem files into your application |
| `bu` | `bundle update` | Update your gems to the latest available versions |
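A few illustrative invocations of the aliases above (the gem names are only examples):

```zsh
be rspec spec/models    # bundle exec rspec spec/models
bout                    # bundle outdated
bu rails                # bundle update rails
bi                      # bundle install, parallelized across CPU cores on bundler >= 1.4.0
```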
## Gem wrapper
The plugin adds a wrapper for common gems, which:
- Looks for a binstub under `./bin/` and executes it if present.
- Calls `bundle exec <gem>` otherwise.
Common gems wrapped by default (by name of the executable):
`annotate`, `cap`, `capify`, `cucumber`, `foodcritic`, `guard`, `hanami`, `irb`, `jekyll`, `kitchen`, `knife`, `middleman`, `nanoc`, `pry`, `puma`, `rackup`, `rainbows`, `rake`, `rspec`, `rubocop`, `shotgun`, `sidekiq`, `spec`, `spork`, `spring`, `strainer`, `tailor`, `taps`, `thin`, `thor`, `unicorn` and `unicorn_rails`.
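For example, inside a bundled project a wrapped command is transparently rerouted. An illustrative sketch of the behaviour rather than the plugin's code:

```zsh
rake db:migrate   # runs ./bin/rake db:migrate when the binstub exists,
                  # otherwise bundle exec rake db:migrate
foreman start     # not wrapped (see Excluded gems); runs the plain executable from $PATH
```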
## Configuration
### Settings
Please use the exact name of the executable and not the gem name.
You can add or remove gems from the list of wrapped commands.
Please **use the exact name of the executable** and not the gem name.
### Add additional gems to be wrapped
#### Include gems to be wrapped (`BUNDLED_COMMANDS`)
Add this before the plugin-list in your `.zshrc`:
Add this before the plugin list in your `.zshrc`:
```sh
BUNDLED_COMMANDS=(rubocop)
@@ -37,10 +51,9 @@ plugins=(... bundler ...)
This will add the wrapper for the `rubocop` gem (i.e. the executable).
#### Exclude gems from being wrapped (`UNBUNDLED_COMMANDS`)
### Exclude gems from being wrapped
Add this before the plugin-list in your `.zshrc`:
Add this before the plugin list in your `.zshrc`:
```sh
UNBUNDLED_COMMANDS=(foreman spin)
@@ -49,13 +62,13 @@ plugins=(... bundler ...)
This will exclude the `foreman` and `spin` gems (i.e. their executable) from being wrapped.
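Both settings can live in the same `.zshrc`; a minimal sketch with the plugin list abbreviated:

```sh
BUNDLED_COMMANDS=(rubocop)
UNBUNDLED_COMMANDS=(foreman spin)

plugins=(... bundler ...)
```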
## Excluded gems
### Excluded gems
These gems should not be called with `bundle exec`. Please see [issue #2923](https://github.com/ohmyzsh/ohmyzsh/pull/2923) on GitHub for clarification.
These gems should not be called with `bundle exec`. Please see [issue #2923](https://github.com/ohmyzsh/ohmyzsh/pull/2923) on GitHub for clarification:
`berks`
`foreman`
`mailcatcher`
`rails`
`ruby`
`spin`
- `berks`
- `foreman`
- `mailcatcher`
- `rails`
- `ruby`
- `spin`

plugins/bundler/bundler.plugin.zsh
View File

@@ -1,13 +1,49 @@
## Aliases
alias ba="bundle add"
alias bck="bundle check"
alias bcn="bundle clean"
alias be="bundle exec"
alias bi="bundle_install"
alias bl="bundle list"
alias bp="bundle package"
alias bo="bundle open"
alias bout="bundle outdated"
alias bp="bundle package"
alias bu="bundle update"
alias bi="bundle_install"
alias bcn="bundle clean"
alias bck="bundle check"
## Functions
bundle_install() {
# Bail out if bundler is not installed
if (( ! $+commands[bundle] )); then
echo "Bundler is not installed"
return 1
fi
# Bail out if not in a bundled project
if ! _within-bundled-project; then
echo "Can't 'bundle install' outside a bundled project"
return 1
fi
# Check the bundler version is at least 1.4.0
autoload -Uz is-at-least
local bundler_version=$(bundle version | cut -d' ' -f3)
if ! is-at-least 1.4.0 "$bundler_version"; then
bundle install "$@"
return $?
fi
# If bundler is at least 1.4.0, use all the CPU cores to bundle install
if [[ "$OSTYPE" = (darwin|freebsd)* ]]; then
local cores_num="$(sysctl -n hw.ncpu)"
else
local cores_num="$(nproc)"
fi
bundle install --jobs="$cores_num" "$@"
}
## Gem wrapper
bundled_commands=(
annotate
@@ -54,65 +90,41 @@ for cmd in $BUNDLED_COMMANDS; do
bundled_commands+=($cmd);
done
## Functions
bundle_install() {
if ! _bundler-installed; then
echo "Bundler is not installed"
elif ! _within-bundled-project; then
echo "Can't 'bundle install' outside a bundled project"
else
local bundler_version=`bundle version | cut -d' ' -f3`
if [[ $bundler_version > '1.4.0' || $bundler_version = '1.4.0' ]]; then
if [[ "$OSTYPE" = (darwin|freebsd)* ]]
then
local cores_num="$(sysctl -n hw.ncpu)"
else
local cores_num="$(nproc)"
fi
bundle install --jobs=$cores_num $@
else
bundle install $@
fi
fi
}
_bundler-installed() {
which bundle > /dev/null 2>&1
}
# Check if in the root or a subdirectory of a bundled project
_within-bundled-project() {
local check_dir="$PWD"
while [ "$check_dir" != "/" ]; do
[ -f "$check_dir/Gemfile" -o -f "$check_dir/gems.rb" ] && return
check_dir="$(dirname $check_dir)"
while [[ "$check_dir" != "/" ]]; do
if [[ -f "$check_dir/Gemfile" || -f "$check_dir/gems.rb" ]]; then
return 0
fi
check_dir="${check_dir:h}"
done
false
}
_binstubbed() {
[ -f "./bin/${1}" ]
return 1
}
_run-with-bundler() {
if _bundler-installed && _within-bundled-project; then
if _binstubbed $1; then
./bin/${^^@}
else
bundle exec $@
fi
if (( ! $+commands[bundle] )) || ! _within-bundled-project; then
"$@"
return $?
fi
if [[ -f "./bin/${1}" ]]; then
./bin/${^^@}
else
$@
bundle exec "$@"
fi
}
## Main program
for cmd in $bundled_commands; do
eval "function unbundled_$cmd () { $cmd \$@ }"
eval "function bundled_$cmd () { _run-with-bundler $cmd \$@}"
alias $cmd=bundled_$cmd
# Create wrappers for bundled and unbundled execution
eval "function unbundled_$cmd () { \"$cmd\" \"\$@\"; }"
eval "function bundled_$cmd () { _run-with-bundler \"$cmd\" \"\$@\"; }"
alias "$cmd"="bundled_$cmd"
if which _$cmd > /dev/null 2>&1; then
compdef _$cmd bundled_$cmd=$cmd
# Bind completion function to wrapped gem if available
if (( $+functions[_$cmd] )); then
compdef "_$cmd" "bundled_$cmd"="$cmd"
fi
done
unset cmd bundled_commands

plugins/git/git.plugin.zsh
View File

@@ -1,3 +1,7 @@
# Git version checking
autoload -Uz is-at-least
git_version="${(As: :)$(git version 2>/dev/null)[3]}"
#
# Functions
#
@@ -104,7 +108,10 @@ function gdv() { git diff -w "$@" | view - }
compdef _git gdv=git-diff
alias gf='git fetch'
alias gfa='git fetch --all --prune'
# --jobs=<n> was added in git 2.8
is-at-least 2.8 "$git_version" \
&& alias gfa='git fetch --all --prune --jobs=10' \
|| alias gfa='git fetch --all --prune'
alias gfo='git fetch origin'
alias gfg='git ls-files | grep'
@@ -240,8 +247,7 @@ alias gss='git status -s'
alias gst='git status'
# use the default stash push on git 2.13 and newer
autoload -Uz is-at-least
is-at-least 2.13 "$(git --version 2>/dev/null | awk '{print $3}')" \
is-at-least 2.13 "$git_version" \
&& alias gsta='git stash push' \
|| alias gsta='git stash save'
@@ -291,3 +297,5 @@ function grename() {
git push --set-upstream origin "$2"
fi
}
unset git_version
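
The same `is-at-least` gating pattern can be reused for other version-dependent aliases in a personal zshrc; a minimal sketch (the `gmaint` alias, the `my_git_version` variable and the 2.29 cutoff are only illustrative, not part of the plugin):

```zsh
autoload -Uz is-at-least
my_git_version="$(git version 2>/dev/null | awk '{print $3}')"

# `git maintenance` appeared in git 2.29, so only define the alias when it is available
if is-at-least 2.29 "$my_git_version"; then
  alias gmaint='git maintenance run'
fi
unset my_git_version
```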

22
plugins/ipfs/LICENSE Normal file
View File

@@ -0,0 +1,22 @@
The MIT License (MIT)
Copyright (c) 2015 Angel Ramboi
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

17
plugins/ipfs/README.md Normal file
View File

@@ -0,0 +1,17 @@
# zsh-ipfs
zsh completion plugin for [The InterPlanetary File System (IPFS)][1]
Please submit issues and pull requests to the [main zsh-ipfs repo][2].
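To try the completions with Oh My Zsh, enable the plugin in your zshrc (assuming the standard plugin mechanism):

```zsh
plugins=(... ipfs)
```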
### About
[IPFS (InterPlanetary File System)][1] is a peer-to-peer hypermedia protocol
designed to make the web faster, safer, and more open.
### License
See: https://github.com/hellounicorn/zsh-ipfs/blob/master/LICENSE
[1]: http://ipfs.io/
[2]: https://github.com/hellounicorn/zsh-ipfs

717
plugins/ipfs/_ipfs Normal file
View File

@@ -0,0 +1,717 @@
#compdef ipfs
#autoload
local -a _1st_arguments
_1st_arguments=(
'add:Add a file or directory to ipfs.'
'bitswap:Interact with the bitswap agent.'
'block:Interact with raw IPFS blocks.'
'bootstrap:Show or edit the list of bootstrap peers.'
'cat:Show IPFS object data.'
'cid:Convert and discover properties of CIDs'
'commands:List all available commands.'
'config:Get and set ipfs config values.'
'daemon:Run a network-connected IPFS node.'
'dag:Interact with ipld dag objects. (experimental)'
'dht:Issue commands directly through the DHT.'
'diag:Generate diagnostic reports.'
'dns:Resolve DNS links.'
'files:Interact with unixfs files.'
'filestore:Interact with filestore objects. (experimental)'
'get:Download IPFS objects.'
'id:Show ipfs node id info.'
'init:Initializes ipfs config file.'
'key:Create and list IPNS name keypairs.'
'log:Interact with the daemon log output.'
'ls:List directory contents for Unix filesystem objects.'
'mount:Mounts IPFS to the filesystem (read-only).'
'name:Publish and resolve IPNS names.'
'object:Interact with IPFS objects.'
'p2p:Libp2p stream mounting.'
'pin:Pin (and unpin) objects to local storage.'
'ping:Send echo request packets to IPFS hosts.'
'refs:List links (references) from an object.'
'repo:Manipulate the IPFS repo.'
'resolve:Resolve the value of names to IPFS.'
'stats:Query IPFS statistics.'
'swarm:Interact with the swarm.'
'tar:Utility functions for tar files in ipfs.'
'update:Download and apply go-ipfs updates'
'version:Show ipfs version information.'
)
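# Helper for nested completion: $1 holds the "name:description" pairs for the
# current level, and the option handling below keys off the top-level command
# stored in $MAIN_SUBCOMMAND.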
_ipfs_subcommand(){
local curcontext="$curcontext" state line
typeset -A opt_args
_arguments -C ':command:->command' '*::options:->options'
case $state in
(command)
_describe -t commands "ipfs subcommand" $1
return
;;
(options)
case $line[1] in
(wantlist)
case $MAIN_SUBCOMMAND in
(bitswap)
_arguments '(-p --peer)'{-p,--peer}'[Specify which peer to show wantlist for. Default: self.]'
;;
esac
;;
(add)
case $MAIN_SUBCOMMAND in
(pin)
_arguments \
'(-r --recursive)'{-r,--recursive}'[Recursively pin the object linked to by the specified object(s). Default: true.]' \
'--progress[Show progress.]'
;;
(bootstrap)
local -a _bootstrap_rm_arguments
_bootstrap_rm_arguments=(
'default:Add default peers to the bootstrap list.'
)
_ipfs_subcommand _bootstrap_rm_arguments
;;
esac
;;
(rm)
case $MAIN_SUBCOMMAND in
(pin)
_arguments '(-r --recursive)'{-r,--recursive}'[Recursively unpin the object linked to by the specified object(s). Default: true.]'
;;
(bootstrap)
local -a _bootstrap_rm_arguments
_bootstrap_rm_arguments=(
'all:Remove all peers from the bootstrap list.'
)
_ipfs_subcommand _bootstrap_rm_arguments
;;
esac
;;
(ls)
case $MAIN_SUBCOMMAND in
(pin)
_arguments \
'(-t --type)'{-t,--type}'[The type of pinned keys to list. Can be "direct", "indirect", "recursive", or "all". Default: all.]' \
'(-q --quiet)'{-q,--quiet}'[Write just hashes of objects.]'
;;
(p2p)
_arguments '(-v --headers)'{-v,--headers}'[Print table headers (Protocol, Listen, Target).]'
;;
esac
;;
(update)
case $MAIN_SUBCOMMAND in
(pin)
_arguments '--unpin[Remove the old pin. Default: true.]'
;;
esac
;;
(verify)
case $MAIN_SUBCOMMAND in
(pin)
_arguments \
'--verbose[Also write the hashes of non-broken pins.]' \
'(-q --quiet)'{-q,--quiet}'[Write just hashes of broken pins.]'
;;
esac
;;
(get|query|findpeer)
case $MAIN_SUBCOMMAND in
(dht)
_arguments '(-v --verbose)'{-v,--verbose}'[Print extra information.]'
;;
(object)
_arguments '--data-encoding[Encoding type of the data field, either "text" or "base64". Default: text.]'
;;
esac
;;
(put)
case $MAIN_SUBCOMMAND in
(dht)
_arguments '(-v --verbose)'{-v,--verbose}'[Print extra information.]'
;;
(object)
_arguments \
'--inputenc[Encoding type of input data. One of: {"protobuf", "json"}. Default: json.]' \
'--datafieldenc[Encoding type of the data field, either "text" or "base64". Default: text.]' \
'--pin[Pin this object when adding.]' \
'(-q --quiet)'{-q,--quiet}'[Write minimal output]'
;;
esac
;;
(findprovs)
case $MAIN_SUBCOMMAND in
(dht)
_arguments \
'(-v --verbose)'{-v,--verbose}'[Print extra information.]' \
'(-n --num-providers)'{-n,--num-providers}'[The number of providers to find. Default: 20.]'
;;
esac
;;
(provide)
case $MAIN_SUBCOMMAND in
(dht)
_arguments \
'(-v --verbose)'{-v,--verbose}'[Print extra information.]' \
'(-r --recursive)'{-r,--recursive}'[Recursively provide entire graph.]'
;;
esac
;;
(cmds|diff)
case $MAIN_SUBCOMMAND in
(diag|object)
_arguments '(-v --verbose)'{-v,--verbose}'[Print extra information.]'
;;
esac
;;
(stat)
case $MAIN_SUBCOMMAND in
(object)
_arguments '--human[Print sizes in human readable format (e.g., 1K 234M 2G).]'
;;
(repo)
_arguments \
'--size-only[Only report RepoSize and StorageMax.]' \
'--human[Print sizes in human readable format (e.g., 1K 234M 2G).]'
;;
esac
;;
(publish)
case $MAIN_SUBCOMMAND in
(name)
_arguments \
'--resolve[Check if the given path can be resolved before publishing. Default: true.]' \
'(-t --lifetime)'{-t,--lifetime}'[Time duration that the record will be valid for. Default: 24h.]' \
'--allow-offline[When offline, save the IPNS record to the local datastore without broadcasting to the network instead of simply failing.]' \
'--ttl[Time duration this record should be cached for. Uses the same syntax as the lifetime option. (caution: experimental).]' \
'(-k --key)'{-k,--key}"[Name of the key to be used or a valid PeerID, as listed by 'ipfs key list -l'. Default: self.]" \
'(-Q --quieter)'{-Q,--quieter}'[Write only final hash.]'
;;
esac
;;
(pubsub)
case $MAIN_SUBCOMMAND in
(name)
local -a _name_pubsub_arguments
_name_pubsub_arguments=(
'cancel:Cancel a name subscription'
'state:Query the state of IPNS pubsub'
'subs:Show current name subscriptions'
)
_ipfs_subcommand _name_pubsub_arguments
;;
esac
;;
(resolve)
case $MAIN_SUBCOMMAND in
(name)
_arguments \
'(-r --recursive)'{-r,--recursive}'[Resolve until the result is not an IPNS name. Default: true.]' \
'(-n --nocache)'{-n,--nocache}'[Do not use cached entries.]' \
'(--dhtrc --dht-record-count)'{--dhtrc,--dht-record-count}'[Number of records to request for DHT resolution.]' \
'(--dhtt --dht-timeout)'{--dhtt,--dht-timeout}'[Max time to collect values during DHT resolution eg "30s". Pass 0 for no timeout.]' \
'(-s --stream)'{-s,--stream}'[Stream entries as they are found.]'
;;
esac
;;
(patch)
case $MAIN_SUBCOMMAND in
(object)
local -a _object_patch_arguments
_object_patch_arguments=(
'add-link:Add a link to a given object.'
'append-data:Append data to the data segment of a dag node.'
'rm-link:Remove a link from a given object.'
'set-data:Set the data field of an IPFS object.'
)
_ipfs_subcommand _object_patch_arguments
;;
esac
;;
(gc)
case $MAIN_SUBCOMMAND in
(repo)
_arguments \
'--stream-errors[Stream errors.]' \
'(-q --quiet)'{-q,--quiet}'[Write minimal output.]'
;;
esac
;;
(bitswap)
case $MAIN_SUBCOMMAND in
(stats)
_arguments \
'(-v --verbose)'{-v,--verbose}'[Print extra information.]' \
'--human[Print sizes in human readable format (e.g., 1K 234M 2G).]'
;;
esac
;;
(bw)
case $MAIN_SUBCOMMAND in
(stats)
_arguments \
'(-p --peer)'{-p,--peer}'[Specify a peer to print bandwidth for.]' \
'(-t --proto)'{-t,--proto}'[Specify a protocol to print bandwidth for.]' \
'--poll[Print bandwidth at an interval.]' \
'(-i --interval)'{-i,--interval}'[Time interval to wait between updating output, if 'poll' is true.]'
;;
esac
;;
(repo)
case $MAIN_SUBCOMMAND in
(stats)
_arguments \
'--size-only[Only report RepoSize and StorageMax.]' \
'--human[Print sizes in human readable format (e.g., 1K 234M 2G).]'
;;
esac
;;
(bases)
case $MAIN_SUBCOMMAND in
(cid)
_arguments \
'--prefix[also include the single letter prefixes in addition to the code.]' \
'--numeric[also include numeric codes.]'
;;
esac
;;
(codecs|hashes)
case $MAIN_SUBCOMMAND in
(cid)
_arguments '--numeric[also include numeric codes.]'
;;
esac
;;
(format)
case $MAIN_SUBCOMMAND in
(cid)
_arguments \
'-f[Printf style format string. Default: %s.]' \
'-v[CID version to convert to.]' \
'-b[Multibase to display CID in.]'
;;
esac
;;
(close)
case $MAIN_SUBCOMMAND in
(p2p)
_arguments \
'(-a --all)'{-a,--all}'[Close all listeners.]' \
'(-p --protocol)'{-p,--protocol}'[Match protocol name.]' \
'(-l --listen-address)'{-l,--listen-address}'[Match listen address.]' \
'(-t --target-address)'{-t,--target-address}'[Match target address.]'
;;
esac
;;
(forward)
case $MAIN_SUBCOMMAND in
(p2p)
_arguments "--allow-custom-protocol[Don't require /x/ prefix.]"
;;
esac
;;
(listen)
case $MAIN_SUBCOMMAND in
(p2p)
_arguments \
"--allow-custom-protocol[Don't require /x/ prefix.]" \
'(-r --report-peer-id)'{-r,--report-peer-id}'[Send remote base58 peerid to target when a new connection is established.]'
;;
esac
;;
(stream)
case $MAIN_SUBCOMMAND in
(p2p)
local -a _p2p_stream_arguments
_p2p_stream_arguments=(
'close:Close active p2p stream.'
'ls:List active p2p streams.'
)
_ipfs_subcommand _p2p_stream_arguments
;;
esac
;;
(addrs)
case $MAIN_SUBCOMMAND in
(swarm)
local -a _swarm_addrs_arguments
_swarm_addrs_arguments=(
'listen:List interface listening addresses.'
'local:List local addresses.'
)
_ipfs_subcommand _swarm_addrs_arguments
;;
esac
;;
(filters)
case $MAIN_SUBCOMMAND in
(swarm)
local -a _swarm_filters_arguments
_swarm_filters_arguments=(
'add:Add an address filter.'
'rm:Remove an address filter.'
)
_ipfs_subcommand _swarm_filters_arguments
;;
esac
;;
(peers)
case $MAIN_SUBCOMMAND in
(swarm)
_arguments \
'(-v --verbose)'{-v,--verbose}'[display all extra information.]' \
'--streams[Also list information about open streams for each peer.]' \
'--latency[Also list information about latency to each peer.]' \
'--direction[Also list information about the direction of connection.]'
;;
esac
;;
esac
;;
esac
}
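# Global ipfs options, then dispatch on the first word to the per-command argument specs below.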
local expl
_arguments \
'(-c --config)'{-c,--config}'[Path to the configuration file to use.]' \
'(-D --debug)'{-D,--debug}'[Operate in debug mode.]' \
'(--help)--help[Show the full command help text.]' \
'(--h)-h[Show a short version of the command help text.]' \
'(-L --local)'{-L,--local}'[Run the command locally, instead of using the daemon. DEPRECATED: use --offline.]' \
'(--offline)--offline[Run the command offline.]' \
'(--api)--api[Use a specific API instance (defaults to /ip4/127.0.0.1/tcp/5001).]' \
'(--cid-base)--cid-base[Multibase encoding used for version 1 CIDs in output.]' \
'(--upgrade-cidv0-in-output)--upgrade-cidv0-in-output[Upgrade version 0 to version 1 CIDs in output.]' \
'(--enc --encoding)'{--enc,--encoding}'[The encoding type the output should be encoded with (json, xml, or text). Default: text.]' \
'(--stream-channels)--stream-channels[Stream channel output.]' \
'(--timeout)--timeout[Set a global timeout on the command.]' \
'*:: :->subcmds' && return 0
if (( CURRENT == 1 )); then
_describe -t commands "ipfs subcommand" _1st_arguments
return
fi
MAIN_SUBCOMMAND="$words[1]"
case $MAIN_SUBCOMMAND in
(add)
_arguments \
'(-r --recursive)'{-r,--recursive}'[Add directory paths recursively.]' \
'(--dereference-args)--dereference-args[Symlinks supplied in arguments are dereferenced.]' \
'(--stdin-name)--stdin-name[Assign a name if the file source is stdin.]' \
'(-H --hidden)'{-H,--hidden}'[Include files that are hidden. Only takes effect on recursive add.]' \
'(-q --quiet)'{-q,--quiet}'[Write minimal output.]' \
'(-Q --quieter)'{-Q,--quieter}'[Write only final hash.]' \
'(--silent)--silent[Write no output.]' \
'(-p --progress)'{-p,--progress}'[Stream progress data.]' \
'(-t --trickle)'{-t,--trickle}'[Use trickle-dag format for dag generation.]' \
'(-n --only-hash)'{-n,--only-hash}'[Only chunk and hash - do not write to disk.]' \
'(-w --wrap-with-directory)'{-w,--wrap-with-directory}'[Wrap files with a directory object.]' \
'(-s --chunker)'{-s,--chunker}'[Chunking algorithm, size-(bytes) or rabin-(min)-(avg)-(max). Default: size-262144.]' \
'(--pin)--pin[Pin this object when adding. Default: true.]' \
'(--raw-leaves)--raw-leaves[Use raw blocks for leaf nodes. (experimental).]' \
'(--nocopy)--nocopy[Add the file using filestore. Implies raw-leaves. (experimental).]' \
'(--fscache)--fscache[Check the filestore for pre-existing blocks. (experimental).]' \
'(--cid-version)--cid-version[CID version. Defaults to 0 unless an option that depends on CIDv1 is passed. (experimental).]' \
'(--hash)--hash[Hash function to use. Implies CIDv1 if not sha2-256. (experimental). Default: sha2-256.]' \
'(--inline)--inline[Inline small blocks into CIDs. (experimental).]' \
'(--inline-limit)--inline-limit[Maximum block size to inline. (experimental). Default: 32.]'
;;
(bitswap)
local -a _bitswap_arguments
_bitswap_arguments=(
'ledger:Show the current ledger for a peer.'
'reprovide:Trigger reprovider.'
'stat:Show some diagnostic information on the bitswap agent.'
'wantlist:Show blocks currently on the wantlist.'
)
_ipfs_subcommand _bitswap_arguments
;;
(block)
local -a _block_arguments
_block_arguments=(
'get:Get a raw IPFS block.'
'put:Store input as an IPFS block.'
'rm:Remove IPFS block(s).'
'stat:Print information of a raw IPFS block.'
)
_ipfs_subcommand _block_arguments
;;
(bootstrap)
local -a _bootstrap_arguments
_bootstrap_arguments=(
'add:Add peers to the bootstrap list.'
'list:Show peers in the bootstrap list.'
'rm:Remove peers from the bootstrap list.'
)
_ipfs_subcommand _bootstrap_arguments
;;
(cat)
_arguments \
'(-o --offset)'{-o,--offset}'[Byte offset to begin reading from.]' \
'(-l --length)'{-l,--length}'[Maximum number of bytes to read.]'
;;
(cid)
local -a _cid_arguments
_cid_arguments=(
'base32:Convert CIDs to Base32 CID version 1.'
'bases:List available multibase encodings.'
'codecs:List available CID codecs.'
'format:Format and convert a CID in various useful ways.'
'hashes:List available multihashes.'
)
_ipfs_subcommand _cid_arguments
;;
(commands)
_arguments '(-f --flags)'{-f,--flags}'[Show command flags.]'
;;
(config)
_arguments \
'--bool[Set a boolean value.]' \
'--json[Parse stringified JSON.]'
local -a _config_arguments
_config_arguments=(
'edit:Open the config file for editing in $EDITOR.'
'profile:Apply profiles to config.'
'replace:Replace the config with <file>.'
'show:Output config file contents.'
)
_ipfs_subcommand _config_arguments
;;
(daemon)
_arguments \
'--init[Initialize ipfs with default settings if not already initialized.]' \
'--init-profile[Configuration profiles to apply for --init. See ipfs init --help for more.]' \
'--routing[Overrides the routing option. Default: default.]' \
'--mount[Mounts IPFS to the filesystem.]' \
'--writable[Enable writing objects (with POST, PUT and DELETE).]' \
'--mount-ipfs[Path to the mountpoint for IPFS (if using --mount). Defaults to config setting.]' \
'--mount-ipns[Path to the mountpoint for IPNS (if using --mount). Defaults to config setting.]' \
'--unrestricted-api[Allow API access to unlisted hashes.]' \
'--disable-transport-encryption[Disable transport encryption (for debugging protocols).]' \
'--enable-gc[Enable automatic periodic repo garbage collection.]' \
'--manage-fdlimit[Check and raise file descriptor limits if needed. Default: true.]' \
'--migrate[If true, assume yes at the migrate prompt. If false, assume no.]' \
'--enable-pubsub-experiment[Instantiate the ipfs daemon with the experimental pubsub feature enabled.]' \
'--enable-namesys-pubsub[Enable IPNS record distribution through pubsub; enables pubsub.]' \
'--enable-mplex-experiment[Add the experimental 'go-multiplex' stream muxer to libp2p on construction. Default: true.]'
;;
(dag)
local -a _dag_arguments
_dag_arguments=(
'get:Get a dag node from ipfs.'
'put:Add a dag node to ipfs.'
'resolve:Resolve ipld block.'
)
_ipfs_subcommand _dag_arguments
;;
(dht)
local -a _dht_arguments
_dht_arguments=(
'findpeer:Find the multiaddresses associated with a Peer ID.'
'findprovs:Find peers that can provide a specific value, given a key.'
'get:Given a key, query the routing system for its best value.'
'provide:Announce to the network that you are providing given values.'
'put:Write a key/value pair to the routing system.'
'query:Find the closest Peer IDs to a given Peer ID by querying the DHT.'
)
_ipfs_subcommand _dht_arguments
;;
(diag)
local -a _diag_arguments
_diag_arguments=(
'cmds:List commands run on this IPFS node.'
'sys:Print system diagnostic information.'
)
_ipfs_subcommand _diag_arguments
;;
(dns)
_arguments '(-r --recursive)'{-r,--recursive}'[Resolve until the result is not a DNS link. Default: true.]'
;;
(files)
_arguments '(-f --flush)'{-f,--flush}'[Flush target and ancestors after write. Default: true.]'
local -a _files_arguments
_files_arguments=(
'chcid:Change the cid version or hash function of the root node of a given path.'
'cp:Copy files into mfs.'
"flush:Flush a given path's data to disk."
'ls:List directories in the local mutable namespace.'
'mkdir:Make directories.'
'mv:Move files.'
'read:Read a file in a given mfs.'
'rm:Remove a file.'
'stat:Display file status.'
'write:Write to a mutable file in a given filesystem.'
)
_ipfs_subcommand _files_arguments
;;
(filestore)
local -a _filestore_arguments
_filestore_arguments=(
'dups:List blocks that are both in the filestore and standard block storage.'
'ls:List objects in filestore.'
'verify:Verify objects in filestore.'
)
_ipfs_subcommand _filestore_arguments
;;
(get)
_arguments \
'(-o --output)'{-o,--output}'[The path where the output should be stored.]'\
'(-a --archive)'{-a,--archive}'[Output a TAR archive.]' \
'(-C --compress)'{-C,--compress}'[Compress the output with GZIP compression.]' \
'(-l --compression-level)'{-l,--compression-level}'[The level of compression (1-9).]'
;;
(id)
_arguments '(-f --format)'{-f,--format}'[Optional output format.]'
;;
(init)
_arguments \
'(-b --bits)'{-b,--bits}'[Number of bits to use in the generated RSA private key. Default: 2048.]' \
'(-e --empty-repo)'{-e,--empty-repo}"[Don't add and pin help files to the local storage.]" \
'(-p --profile)'{-p,--profile}"[Apply profile settings to config. Multiple profiles can be separated by ','.]"
;;
(key)
local -a _key_arguments
_key_arguments=(
'gen:Create a new keypair'
'list:List all local keypairs'
'rename:Rename a keypair'
'rm:Remove a keypair'
)
_ipfs_subcommand _key_arguments
;;
(log)
local -a _log_arguments
_log_arguments=(
'level:Change the logging level.'
'ls:List the logging subsystems.'
'tail:Read the event log.'
)
_ipfs_subcommand _log_arguments
;;
(ls)
_arguments \
'(-v --headers)'{-v,--headers}'[Print table headers (Hash, Size, Name).]' \
'--resolve-type[Resolve linked objects to find out their types. Default: true.]' \
'--size[Resolve linked objects to find out their file size. Default: true.]' \
'(-s --stream)'{-s,--stream}'[Enable experimental streaming of directory entries as they are traversed.]' \
;;
(mount)
_arguments \
'(-f --ipfs-path)'{-f,--ipfs-path}'[The path where IPFS should be mounted.]' \
'(-n --ipns-path)'{-n,--ipns-path}'[The path where IPNS should be mounted.]'
;;
(name)
local -a _name_arguments
_name_arguments=(
'publish:Publish IPNS names.'
'pubsub:IPNS pubsub management.'
'resolve:Resolve IPNS names.'
)
_ipfs_subcommand _name_arguments
;;
(object)
local -a _object_arguments
_object_arguments=(
'data:Output the raw bytes of an IPFS object.'
'diff:Display the diff between two ipfs objects.'
'get:Get and serialize the DAG node named by <key>.'
'links:Output the links pointed to by the specified object.'
'new:Create a new object from an ipfs template.'
'patch:Create a new merkledag object based on an existing one.'
'put:Store input as a DAG object, print its key.'
'stat:Get stats for the DAG node named by <key>.'
)
_ipfs_subcommand _object_arguments
;;
(p2p)
local -a _p2p_arguments
_p2p_arguments=(
'close:Stop listening for new connections to forward.'
'forward:Forward connections to libp2p service'
'listen:Create libp2p service'
'ls:List active p2p listeners.'
'stream:P2P stream management.'
)
_ipfs_subcommand _p2p_arguments
;;
(pin)
local -a _pin_arguments
_pin_arguments=(
'add:Pin objects to local storage.'
'ls:List objects pinned to local storage.'
'rm:Remove pinned objects from local storage.'
'update:Update a recursive pin'
'verify:Verify that recursive pins are complete.'
)
_ipfs_subcommand _pin_arguments
;;
(ping)
_arguments '(-n --count)'{-n,--count}'[Number of ping messages to send. Default: 10.]'
;;
(refs)
_arguments \
'--format[Emit edges with given format. Available tokens: <src> <dst> <linkname>. Default: <dst>.]' \
'(-e --edges)'{-e,--edges}'[Emit edge format: `<from> -> <to>`.]' \
'(-u --unique)'{-u,--unique}'[Omit duplicate refs from output.]' \
'(-r --recursive)'{-r,--recursive}'[Recursively list links of child nodes.]' \
'--max-depth[Only for recursive refs, limits fetch and listing to the given depth. Default: -1.]'
local -a _refs_arguments
_refs_arguments=('local:List all local references.')
_ipfs_subcommand _refs_arguments
;;
(repo)
local -a _repo_arguments
_repo_arguments=(
'fsck:Remove repo lockfiles.'
'gc:Perform a garbage collection sweep on the repo.'
'stat:Get stats for the currently used repo.'
'verify:Verify all blocks in repo are not corrupted.'
'version:Show the repo version.'
)
_ipfs_subcommand _repo_arguments
;;
(resolve)
_arguments \
'(-r --recursive)'{-r,--recursive}'[Resolve until the result is an IPFS name. Default: true.]' \
'(--dhtrc --dht-record-count)'{--dhtrc,--dht-record-count}'[Number of records to request for DHT resolution.]' \
'(--dhtt --dht-timeout)'{--dhtt,--dht-timeout}'[Max time to collect values during DHT resolution eg "30s". Pass 0 for no timeout.]'
;;
(stats)
local -a _stats_arguments
_stats_arguments=(
'bitswap:Show some diagnostic information on the bitswap agent.'
'bw:Print ipfs bandwidth information.'
'repo:Get stats for the currently used repo.'
)
_ipfs_subcommand _stats_arguments
;;
(swarm)
local -a _swarm_arguments
_swarm_arguments=(
'addrs:List known addresses. Useful for debugging.'
'connect:Open connection to a given address.'
'disconnect:Close connection to a given address.'
'filters:Manipulate address filters.'
'peers:List peers with open connections.'
)
_ipfs_subcommand _swarm_arguments
;;
(tar)
local -a _tar_arguments
_tar_arguments=(
'add:Import a tar file into ipfs.'
'cat:Export a tar file from IPFS.'
)
_ipfs_subcommand _tar_arguments
;;
(version)
_arguments \
'(-n --number)'{-n,--number}'[Only show the version number.]' \
'--commit[Show the commit hash.]' \
'--repo[Show repo version.]' \
'--all[Show all version information.]'
;;
esac

17
plugins/sublime-merge/README.md Normal file
View File

@@ -0,0 +1,17 @@
## sublime-merge
Plugin for Sublime Merge, a cross-platform Git client from the makers of Sublime Text, available for Linux, macOS, and Windows.
### Requirements
* [Sublime Merge](https://www.sublimemerge.com)
### Usage
* If the `sm` command is called without arguments, it launches Sublime Merge
* If `sm` is passed a directory, it `cd`s into it and opens the existing git repository in Sublime Merge
* `smt` is equivalent to `sm .`, opening the existing git repository in the current folder in Sublime Merge
* `ssm` behaves like `sudo sm`, opening the git repository in Sublime Merge with elevated privileges. Useful for editing system-protected repositories.
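
A short usage sketch (the paths are hypothetical):

```zsh
sm                        # launch Sublime Merge
sm ~/code/my-project      # cd into the directory and open its git repository
smt                       # same as `sm .`
ssm /opt/protected-repo   # open a repository that needs root privileges
```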

55
plugins/sublime-merge/sublime-merge.plugin.zsh Normal file
View File

@@ -0,0 +1,55 @@
# Sublime Merge Aliases
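# Anonymous function: locate the Sublime Merge executable for the current OS
# and define the sm (and, on Linux, ssm) wrappers around it.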
() {
if [[ "$OSTYPE" == linux* ]]; then
local _sublime_linux_paths
_sublime_linux_paths=(
"$HOME/bin/sublime_merge"
"/opt/sublime_merge/sublime_merge"
"/usr/bin/sublime_merge"
"/usr/local/bin/sublime_merge"
"/usr/bin/sublime_merge"
"/usr/local/bin/smerge"
"/usr/bin/smerge"
)
for _sublime_merge_path in $_sublime_linux_paths; do
if [[ -a $_sublime_merge_path ]]; then
sm_run() { $_sublime_merge_path "$@" >/dev/null 2>&1 &| }
ssm_run_sudo() { sudo $_sublime_merge_path "$@" >/dev/null 2>&1 }
alias ssm=ssm_run_sudo
alias sm=sm_run
break
fi
done
elif [[ "$OSTYPE" = darwin* ]]; then
local _sublime_darwin_paths
_sublime_darwin_paths=(
"/usr/local/bin/smerge"
"/Applications/Sublime Merge.app/Contents/SharedSupport/bin/smerge"
"$HOME/Applications/Sublime Merge.app/Contents/SharedSupport/bin/smerge"
)
for _sublime_merge_path in $_sublime_darwin_paths; do
if [[ -a $_sublime_merge_path ]]; then
subm () { "$_sublime_merge_path" "$@" }
alias sm=subm
break
fi
done
elif [[ "$OSTYPE" = 'cygwin' ]]; then
local _sublime_merge_cygwin_paths
_sublime_merge_cygwin_paths=(
"$(cygpath $ProgramW6432/Sublime\ Merge)/sublime_merge.exe"
)
for _sublime_merge_path in $_sublime_merge_cygwin_paths; do
if [[ -a $_sublime_merge_path ]]; then
subm () { "$_sublime_merge_path" "$@" }
alias sm=subm
break
fi
done
fi
}
alias smt='sm .'