tarfeef101 committed 3 years ago
commit
47de89abe0
4 changed files with 41 additions and 0 deletions
  1. Dockerfile (5 additions, 0 deletions)
  2. README.md (9 additions, 0 deletions)
  3. docker-compose.yaml (19 additions, 0 deletions)
  4. entrypoint.sh (8 additions, 0 deletions)

+ 5 - 0
Dockerfile

@@ -0,0 +1,5 @@
+FROM alpine:latest
+RUN apk add --no-cache aws-cli
+VOLUME /opt
+COPY entrypoint.sh /root/entrypoint.sh
+CMD /root/entrypoint.sh

+ 9 - 0
README.md

@@ -0,0 +1,9 @@
+# aws_s3_sidecar
+## Purpose
+Run this container as a sidecar to other services in a deployment environment such as an ECS cluster/service/task or a K8S cluster/pod. The container pulls config files from S3 and exposes them through a shared volume so that your other containers can access them without relying on persistent storage. This is useful when you are deploying someone else's images (e.g. a vendor's) and don't want to extend them or deal with hosting your own repos, but you still need to bring in configuration files or something similar to get them working.
+## Usage
+Deploy the container in an environment where it inherits an IAM role (e.g. an ECS task role), or provide the environment variables needed for `aws-cli` to pick up credentials. To specify which files to expose, set the following environment variables:
+- `BUCKET` should be the bucket you wish to retrieve the files from
+- `FILES` is a `|`-delimited list of object keys within the bucket you wish to expose; each file's full key is used as its path under the mount point
+
+The files will be exposed in a volume at `/opt`, which you should then mount into any other containers that need to access the files.
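
As a usage illustration (a hedged sketch, not part of the commit: the bucket name, object keys, and the `myapp:latest` image are hypothetical, and the AWS credentials are assumed to already be exported in the host shell), the sidecar and an application container can share the pulled files through the same host directory:

```sh
# Start the sidecar: it copies the listed keys from the bucket into ./mount,
# which is bind-mounted at /opt inside the container.
docker run -d \
  -e BUCKET=my-config-bucket \
  -e FILES='app/config.yaml|app/secrets.env' \
  -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN \
  -v "$(pwd)/mount:/opt" \
  tarfeef101/s3_sidecar:latest

# Start the application container with the same directory mounted read-only,
# so it can read the config files the sidecar pulled.
docker run -d \
  -v "$(pwd)/mount:/etc/myapp:ro" \
  myapp:latest
```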

+ 19 - 0
docker-compose.yaml

@@ -0,0 +1,19 @@
+version: '3'
+
+services:
+  sidecar:
+    image: tarfeef101/s3_sidecar:latest
+    build: .
+    logging:
+      driver: "json-file"
+      options:
+        max-size: "200k"
+        max-file: "1"
+    environment:
+      - AWS_ACCESS_KEY_ID=
+      - AWS_SECRET_ACCESS_KEY=
+      - AWS_SESSION_TOKEN=
+      - FILES=
+      - BUCKET=
+    volumes:
+      - ./mount:/opt
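
Assuming the AWS credential and `BUCKET`/`FILES` values in the compose file are filled in, the sidecar can be exercised locally with something like the following (a sketch; `docker compose` may be `docker-compose` depending on your installation):

```sh
# Build the image and start the sidecar in the background;
# the pulled files then appear under ./mount on the host.
docker compose up --build -d sidecar
ls ./mount
```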

+ 8 - 0
entrypoint.sh

@@ -0,0 +1,8 @@
+#!/bin/sh
+OLDIFS=$IFS
+IFS='|'
+for each in $FILES; do
+  aws s3 cp s3://$BUCKET/$each /opt/$each
+done
+IFS=$OLDIFS
+tail -f /dev/null
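
For reference, a slightly more defensive variant of the entrypoint (a sketch, not part of this commit: it quotes expansions, skips empty entries, and exits if a copy fails) could look like this:

```sh
#!/bin/sh
# Exit on errors and on use of unset variables (e.g. a missing BUCKET or FILES).
set -eu
# Split FILES on '|' and copy each key from the bucket into /opt,
# keeping the full key as the path under the mount point.
OLDIFS=$IFS
IFS='|'
for each in $FILES; do
  [ -n "$each" ] || continue
  aws s3 cp "s3://$BUCKET/$each" "/opt/$each"
done
IFS=$OLDIFS
# Keep the container running so the shared volume remains available.
tail -f /dev/null
```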