[Minio](https://minio.io) is a distributed object storage service for high performance, high scale data infrastructures. It is a drop-in replacement for AWS S3 in your own environment. It uses erasure coding to provide highly resilient storage that can tolerate failures of up to n/2 nodes. It runs on cloud, container, Kubernetes and bare-metal environments. It is simple enough to be deployed in seconds, and can scale to hundreds of petabytes. Minio is suitable for storing objects such as photos, videos, log files, backups, VM and container images.
Minio supports [distributed mode](https://docs.minio.io/docs/distributed-minio-quickstart-guide). In distributed mode, you can pool multiple drives (even on different machines) into a single object storage server.

This chart bootstraps a Minio deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
Prerequisites
-------------

- Kubernetes 1.4+ with Beta APIs enabled for default standalone mode.
- Kubernetes 1.5+ with Beta APIs enabled to run Minio in [distributed mode](#distributed-minio).
- PV provisioner support in the underlying infrastructure.
Installing the Chart
--------------------

Install this chart using:

    $ helm install stable/minio

The command deploys Minio on the Kubernetes cluster in the default configuration. The [configuration](#configuration) section lists the parameters that can be configured during installation.
An instance of a chart running in a Kubernetes cluster is called a release. Each release is identified by a unique name within the cluster. Helm automatically assigns a unique release name after installing the chart. You can also set your preferred name by:

    $ helm install --name my-release stable/minio
### Access and Secret keys
By default, a pre-generated access key and secret key are used. To override the defaults, pass the access and secret keys as arguments to `helm install`:

    $ helm install --set accessKey=myaccesskey,secretKey=mysecretkey \
        stable/minio
### Updating Minio configuration via Helm
[ConfigMap](https://kubernetes.io/docs/user-guide/configmap/) allows injecting containers with configuration data even while a Helm release is deployed.

To update your Minio server configuration while it is deployed in a release, you need to:

1. Check all the configurable values in the Minio chart using `helm inspect values stable/minio`.
2. Override the `minio_server_config` settings in a YAML formatted file, and then pass that file like this: `helm upgrade -f config.yaml stable/minio`.
3. Restart the Minio server(s) for the changes to take effect.

You can also check the history of upgrades to a release using `helm history my-release`. Replace `my-release` with the actual release name.
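As a sketch of step 2, the override file could be produced as below. The `region` key shown is only an assumption for illustration; use the actual key names reported by `helm inspect values stable/minio`.

```shell
# Write an override file for step 2; the `region` key is an assumed
# example -- check `helm inspect values stable/minio` for real keys.
cat > config.yaml <<'EOF'
minio_server_config:
  region: "us-east-1"
EOF

# Against a live release you would then run:
# helm upgrade -f config.yaml my-release stable/minio
```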
Uninstalling the Chart
----------------------

Assuming your release is named `my-release`, delete it using the command:

    $ helm delete my-release

The command removes all the Kubernetes components associated with the chart and deletes the release.
Upgrading the Chart
-------------------

You can use Helm to update the Minio version in a live release. Assuming your release is named `my-release`, get its values using the command:

    $ helm get values my-release > old_values.yaml

Then change the `image.tag` field in the `old_values.yaml` file to the Minio image tag you want to use. Now update the chart using:

    $ helm upgrade -f old_values.yaml my-release stable/minio

Default upgrade strategies are specified in the `values.yaml` file. Update these fields if you'd like to use a different strategy.
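The tag swap above can also be scripted; the file contents and both tags below are only examples, so substitute a real tag from the minio/minio tags page:

```shell
# A minimal old_values.yaml as `helm get values` might return it
cat > old_values.yaml <<'EOF'
image:
  repository: minio/minio
  tag: RELEASE.2019-02-12T21-58-47Z
EOF

# Rewrite image.tag to the desired release (example tag shown)
sed 's|tag: RELEASE\..*|tag: RELEASE.2019-03-06T22-47-10Z|' old_values.yaml > new_values.yaml

# helm upgrade -f new_values.yaml my-release stable/minio
```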
Configuration
-------------

The following table lists the configurable parameters of the Minio chart and their default values.
| Parameter | Description | Default |
|----------------------------|-------------------------------------|---------------------------------------------------------|
| `image.repository` | Image repository | `minio/minio` |
| `image.tag` | Minio image tag. Possible values listed [here](https://hub.docker.com/r/minio/minio/tags/).| `RELEASE.2019-02-12T21-58-47Z`|
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `mcImage.repository` | Client image repository | `minio/mc` |
| `mcImage.tag` | mc image tag. Possible values listed [here](https://hub.docker.com/r/minio/mc/tags/).| `RELEASE.2019-02-13T19-48-27Z`|
| `mcImage.pullPolicy` | mc image pull policy | `IfNotPresent` |
| `ingress.enabled` | Enables Ingress | `false` |
| `ingress.annotations` | Ingress annotations | `{}` |
| `ingress.hosts` | Ingress accepted hostnames | `[]` |
| `ingress.tls` | Ingress TLS configuration | `[]` |
| `mode` | Minio server mode (`standalone` or `distributed`)| `standalone` |
| `replicas` | Number of nodes (applicable only for Minio distributed mode). Should be 4 <= x <= 32 | `4` |
| `existingSecret` | Name of existing secret with access and secret key.| `""` |
| `accessKey` | Default access key (5 to 20 characters) | `AKIAIOSFODNN7EXAMPLE` |
| `secretKey` | Default secret key (8 to 40 characters) | `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY` |
| `configPath` | Default config file location | `~/.minio` |
| `configPathmc` | Default config file location for the Minio client, mc | `~/.mc` |
| `mountPath` | Default mount location for the persistent drive| `/export` |
| `clusterDomain` | Domain name of the Kubernetes cluster where the pod is running.| `cluster.local` |
| `service.type` | Kubernetes service type | `ClusterIP` |
| `service.port` | Kubernetes port where service is exposed| `9000` |
| `service.externalIPs` | Service external IP addresses | `nil` |
| `service.annotations` | Service annotations | `{}` |
| `persistence.enabled` | Use persistent volume to store data | `true` |
| `persistence.size` | Size of persistent volume claim | `10Gi` |
| `persistence.existingClaim`| Use an existing PVC to persist data | `nil` |
| `persistence.storageClass` | Storage class name of PVC | `nil` |
| `persistence.accessMode` | Access mode of the PVC (e.g. `ReadWriteOnce`) | `ReadWriteOnce` |
| `persistence.subPath` | Mount a sub directory of the persistent volume if set | `""` |
| `resources` | CPU/Memory resource requests/limits | Memory: `256Mi`, CPU: `100m` |
| `priorityClassName` | Pod priority settings | `""` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `affinity` | Affinity settings for pod assignment | `{}` |
| `tolerations` | Toleration labels for pod assignment | `[]` |
| `podAnnotations` | Pod annotations | `{}` |
| `tls.enabled` | Enable TLS for Minio server | `false` |
| `tls.certSecret` | Kubernetes Secret with `public.crt` and `private.key` files. | `""` |
| `livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | `5` |
| `livenessProbe.periodSeconds` | How often to perform the probe | `30` |
| `livenessProbe.timeoutSeconds` | When the probe times out | `1` |
| `livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed. | `1` |
| `livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | `3` |
| `readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | `5` |
| `readinessProbe.periodSeconds` | How often to perform the probe | `15` |
| `readinessProbe.timeoutSeconds` | When the probe times out | `1` |
| `readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed. | `1` |
| `readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | `3` |
| `defaultBucket.enabled` | If set to true, a bucket will be created after Minio install | `false` |
| `defaultBucket.name` | Bucket name | `bucket` |
| `defaultBucket.policy` | Bucket policy | `none` |
| `defaultBucket.purge` | Purge the bucket if it already exists | `false` |
| `buckets` | List of buckets to create after Minio install | `[]` |
| `s3gateway.enabled` | Use Minio as an [S3 gateway](https://github.com/minio/minio/blob/master/docs/gateway/s3.md)| `false` |
| `s3gateway.replicas` | Number of S3 gateway instances to run in parallel | `4` |
| `s3gateway.serviceEndpoint`| Endpoint to the S3-compatible service | `""` |
| `azuregateway.enabled` | Use Minio as an [Azure gateway](https://docs.minio.io/docs/minio-gateway-for-azure)| `false` |
| `gcsgateway.enabled` | Use Minio as a [Google Cloud Storage gateway](https://docs.minio.io/docs/minio-gateway-for-gcs)| `false` |
| `gcsgateway.gcsKeyJson` | Credential JSON file of the service account key | `""` |
| `gcsgateway.projectId` | Google Cloud project ID | `""` |
| `ossgateway.enabled` | Use Minio as an [Alibaba Cloud Object Storage Service gateway](https://github.com/minio/minio/blob/master/docs/gateway/oss.md)| `false` |
| `ossgateway.replicas` | Number of OSS gateway instances to run in parallel | `4` |
| `ossgateway.endpointURL` | OSS server endpoint. | `""` |
| `nasgateway.enabled` | Use Minio as a [NAS gateway](https://docs.minio.io/docs/minio-gateway-for-nas) | `false` |
| `nasgateway.replicas` | Number of NAS gateway instances to be run in parallel on a PV | `4` |
| `environment` | Set Minio server environment variables in the `values.yaml` file. Minio containers will be passed these variables when they start. | `MINIO_BROWSER: "on"` |
Some of the parameters above map to the env variables defined in the [Minio DockerHub image](https://hub.docker.com/r/minio/minio/).

You can specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example:

    $ helm install --name my-release \
        --set persistence.size=100Gi \
        stable/minio

The above command deploys a Minio server with a 100Gi backing persistent volume.

Alternately, you can provide a YAML file that specifies parameter values while installing the chart. For example:

    $ helm install --name my-release -f values.yaml stable/minio

> **Tip**: You can use the default [values.yaml](values.yaml)
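As a sketch, a custom values file might combine several parameters from the table above; every value here is illustrative, not a recommendation:

```shell
# Illustrative custom values file combining parameters from the table
cat > my-values.yaml <<'EOF'
mode: standalone
persistence:
  enabled: true
  size: 100Gi
service:
  type: ClusterIP
  port: 9000
environment:
  MINIO_BROWSER: "on"
EOF

# helm install --name my-release -f my-values.yaml stable/minio
```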
Distributed Minio
-----------------

This chart provisions a Minio server in standalone mode, by default. To provision a Minio server in [distributed mode](https://docs.minio.io/docs/distributed-minio-quickstart-guide), set the `mode` field to `distributed`:

    $ helm install --set mode=distributed stable/minio

This provisions Minio server in distributed mode with 4 nodes. To change the number of nodes in your distributed Minio server, set the `replicas` field:

    $ helm install --set mode=distributed,replicas=8 stable/minio

This provisions Minio server in distributed mode with 8 nodes. Note that the `replicas` value should be an integer between 4 and 32 (inclusive).
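To make the fault-tolerance figure concrete: with erasure coding, an n-node distributed Minio deployment can keep serving reads with up to n/2 nodes down, so an 8-node cluster tolerates 4 failed nodes:

```shell
# Quick arithmetic behind the n/2 fault-tolerance claim
replicas=8
echo "tolerates $((replicas / 2)) failed nodes out of $replicas"
# prints: tolerates 4 failed nodes out of 8
```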
### StatefulSet [limitations](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/#limitations) applicable to distributed Minio

1. StatefulSets need persistent storage, so the `persistence.enabled` flag is ignored when `mode` is set to `distributed`.
2. When uninstalling a distributed Minio release, you'll need to manually delete volumes associated with the StatefulSet.
NAS Gateway
-----------

Minio in [NAS gateway mode](https://docs.minio.io/docs/minio-gateway-for-nas) can be used to create multiple Minio instances backed by a single PV in `ReadWriteMany` mode. Currently, only a few [Kubernetes volume plugins](https://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes) support `ReadWriteMany` mode. To deploy the Minio NAS gateway with this Helm chart, you'll need a Persistent Volume backed by one of the supported volume plugins. [This document](https://kubernetes.io/docs/user-guide/volumes/#nfs) outlines the steps to create an NFS PV in a Kubernetes cluster.
### Provision NAS Gateway Minio instances

To provision Minio servers in [NAS gateway mode](https://docs.minio.io/docs/minio-gateway-for-nas), set the `nasgateway.enabled` field to `true`:

    $ helm install --set nasgateway.enabled=true stable/minio

This provisions 4 Minio NAS gateway instances backed by a single PV. To change the number of instances in your Minio deployment, set the `nasgateway.replicas` field:

    $ helm install --set nasgateway.enabled=true,nasgateway.replicas=8 stable/minio

This provisions the Minio NAS gateway with 8 instances.
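For reference, a hypothetical NFS-backed PV that the NAS gateway instances could share might look like this; the server address, export path, and size are all placeholders for your environment:

```shell
# Placeholder NFS PersistentVolume manifest with the ReadWriteMany
# access mode the NAS gateway needs; adjust server/path/size to taste
cat > nfs-pv.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: minio-nas-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.2
    path: /exports/minio
EOF

# kubectl apply -f nfs-pv.yaml
```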
Persistence
-----------

This chart provisions a PersistentVolumeClaim and mounts the corresponding persistent volume at the default location `/export`. You'll need physical storage available in the Kubernetes cluster for this to work. If you'd rather use `emptyDir`, disable the PersistentVolumeClaim by:

    $ helm install --set persistence.enabled=false stable/minio

> *"An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever."*
Existing PersistentVolumeClaim
------------------------------

If a Persistent Volume Claim already exists, specify it during installation.

1. Create the PersistentVolume
2. Create the PersistentVolumeClaim
3. Install the chart, passing the claim name:

       $ helm install --set persistence.existingClaim=PVC_NAME stable/minio
NetworkPolicy
-------------

To enable network policy for Minio, install [a networking plugin that implements the Kubernetes NetworkPolicy spec](https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy#before-you-begin), and set `networkPolicy.enabled` to `true`.

For Kubernetes v1.5 & v1.6, you must also turn on NetworkPolicy by setting the DefaultDeny namespace annotation. Note: this will enforce policy for _all_ pods in the namespace:

    kubectl annotate namespace default "net.beta.kubernetes.io/network-policy={\"ingress\":{\"isolation\":\"DefaultDeny\"}}"

With NetworkPolicy enabled, traffic will be limited to just port 9000.

For more precise policy, set `networkPolicy.allowExternal=true`. This will only allow pods with the generated client label to connect to Minio. This label will be displayed in the output of a successful install.
Existing secret
---------------

Instead of having this chart create the secret for you, you can supply a preexisting secret, much like an existing PersistentVolumeClaim.

First, create the secret:

    $ kubectl create secret generic my-minio-secret --from-literal=accesskey=foobarbaz --from-literal=secretkey=foobarbazqux

Then install the chart, specifying that you want to use an existing secret:

    $ helm install --set existingSecret=my-minio-secret stable/minio

The following fields are expected in the secret:

1. `accesskey` - the access key ID
2. `secretkey` - the secret key
3. `gcs_key.json` - the GCS key if you are using the GCS gateway feature. This is optional.
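Equivalently, the secret can be written as a manifest instead of via `kubectl create secret`; the key names (`accesskey`, `secretkey`) are what the chart expects, and the values below reuse the same placeholders as above:

```shell
# Build the secret as a manifest; Secret data values must be
# base64-encoded, so encode the placeholder keys first
access=$(printf 'foobarbaz' | base64)
secret=$(printf 'foobarbazqux' | base64)
cat > my-minio-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: my-minio-secret
type: Opaque
data:
  accesskey: $access
  secretkey: $secret
EOF

# kubectl apply -f my-minio-secret.yaml
```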
TLS Secrets
-----------

To enable TLS for Minio containers, acquire TLS certificates from a CA or create self-signed certificates. While creating / acquiring certificates, ensure the corresponding domain names are set as per the standard [DNS naming conventions](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-identity) in a Kubernetes StatefulSet (for a distributed Minio setup). Then create a secret using:

    $ kubectl create secret generic tls-ssl-minio --from-file=path/to/private.key --from-file=path/to/public.crt

Then install the chart, specifying that you want to use the TLS secret:

    $ helm install --set tls.enabled=true,tls.certSecret=tls-ssl-minio stable/minio
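For testing, a self-signed pair with the file names the chart expects (`public.crt`, `private.key`) can be generated with OpenSSL; the CN below is a placeholder, and a real deployment should use the StatefulSet DNS names mentioned above:

```shell
# Self-signed certificate sketch; the CN is a placeholder domain
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=minio.example.com" \
  -keyout private.key -out public.crt

# kubectl create secret generic tls-ssl-minio \
#   --from-file=private.key --from-file=public.crt
```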
Pass environment variables to Minio containers
----------------------------------------------

To pass environment variables to Minio containers when deploying via the Helm chart, use the following format:

    $ helm install --set environment.MINIO_BROWSER=on,environment.MINIO_DOMAIN=domain-name stable/minio

You can add as many environment variables as required, using the above format. Just add `environment.<VARIABLE_NAME>=<value>` under the `set` flag.
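The same variables can live in a values file instead of repeated `--set` flags; the `MINIO_DOMAIN` value here is a placeholder:

```shell
# Environment variables expressed as chart values
cat > env-values.yaml <<'EOF'
environment:
  MINIO_BROWSER: "on"
  MINIO_DOMAIN: "minio.example.com"
EOF

# helm install -f env-values.yaml stable/minio
```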
Create buckets after install
----------------------------

Install the chart, specifying the buckets you want to create after install:

    $ helm install --set buckets[0].name=bucket1,buckets[0].policy=none,buckets[0].purge=false stable/minio

Description of the configuration parameters used above:

1. `buckets[].name` - name of the bucket to create; must be a string with length > 0
2. `buckets[].policy` - can be one of `none|download|upload|public`
3. `buckets[].purge` - purge the bucket if it already exists
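Because the `--set` array syntax gets unwieldy for more than one bucket, the same list can be expressed in a values file; the bucket names and policies below are only examples:

```shell
# Bucket list as chart values rather than --set flags
cat > buckets-values.yaml <<'EOF'
buckets:
  - name: bucket1
    policy: none
    purge: false
  - name: bucket2
    policy: download
    purge: false
EOF

# helm install -f buckets-values.yaml stable/minio
```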