This repository was archived by the owner on Feb 1, 2021. It is now read-only.
Merged
2 changes: 1 addition & 1 deletion Godeps/Godeps.json

Some generated files are not rendered by default. Learn more about how customized files appear on GitHub.
82 changes: 69 additions & 13 deletions scheduler/filter/README.md
Original file line number Diff line number Diff line change
@@ -97,48 +97,65 @@ without specifying them when starting the node. Those tags are sourced from
* kernelversion
* operatingsystem

## Affinity Filter
## Affinity filter

#### Containers
You use an `--affinity:<filter>` to create "attractions" between containers. For
example, you can run a container and instruct it to locate and run next to
another container based on an identifier, an image, or a label. These
attractions ensure that containers run on the same network node &mdash; without
you having to know what each node is running.

You can schedule 2 containers and make the container #2 next to the container #1.
#### Container affinity

You can schedule a new container to run next to another based on a container
name or ID. For example, you can start a container called `frontend` running
`nginx`:

```bash
$ docker run -d -p 80:80 --name frontend nginx
87c4376856a8

$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES
87c4376856a8 nginx:latest "nginx" Less than a second ago running 192.168.0.42:80->80/tcp node-1 frontend
```

Using `-e affinity:container==front` will schedule a container next to the container `front`.
You can also use IDs instead of name: `-e affinity:container==87c4376856a8`
Then, use the `-e affinity:container==frontend` flag to schedule a second
container to locate and run next to `frontend`.

```bash
$ docker run -d --name logger -e affinity:container==front logger
$ docker run -d --name logger -e affinity:container==frontend logger
87c4376856a8

$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES
87c4376856a8 nginx:latest "nginx" Less than a second ago running 192.168.0.42:80->80/tcp node-1 front
87c4376856a8 nginx:latest "nginx" Less than a second ago running 192.168.0.42:80->80/tcp node-1 frontend
963841b138d8 logger:latest "logger" Less than a second ago running node-1 logger
```

The `logger` container ends up on `node-1` because its affinity with the container `front`.
Because of name affinity, the `logger` container ends up on `node-1` along with
the `frontend` container. Instead of the `frontend` name you could have supplied its
ID as follows:

```bash
docker run -d --name logger -e affinity:container==87c4376856a8
```


#### Images
#### Image affinity

You can schedule a container only on nodes where a specific image is already pulled.
You can schedule a container to run only on nodes where a specific image is already pulled.

```bash
$ docker -H node-1:2375 pull redis
$ docker -H node-2:2375 pull mysql
$ docker -H node-3:2375 pull redis
```

Here only `node-1` and `node-3` have the `redis` image. Using `-e affinity:image=redis` we can
schedule container only on these 2 nodes. You can also use the image ID instead of its name.
Only `node-1` and `node-3` have the `redis` image. Specify a `-e
affinity:image==redis` filter to schedule several additional containers to run on
these nodes.

```bash
$ docker run -d --name redis1 -e affinity:image==redis redis
@@ -162,7 +179,46 @@ CONTAINER ID IMAGE COMMAND CREATED
963841b138d8 redis:latest "redis" Less than a second ago running node-1 redis8
```

As you can see here, the containers were only scheduled on nodes with the `redis` image already pulled.
As you can see here, the containers were only scheduled on nodes that had the
`redis` image. Instead of the image name, you could have specified the image ID.

```bash
$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
redis latest 06a1f75304ba 2 days ago 111.1 MB

$ docker run -d --name redis1 -e affinity:image==06a1f75304ba redis
```


#### Label affinity

Label affinity allows you to set up an attraction based on a container's label.
For example, you can run an `nginx` container with the `com.example.type=frontend` label.

```bash
$ docker run -d -p 80:80 --label com.example.type=frontend nginx
87c4376856a8

$ docker ps --filter "label=com.example.type=frontend"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES
87c4376856a8 nginx:latest "nginx" Less than a second ago running 192.168.0.42:80->80/tcp node-1 trusting_yonath
```

Then, use `-e affinity:com.example.type==frontend` to schedule a container next to
the container carrying the `com.example.type=frontend` label.

```bash
$ docker run -d -e affinity:com.example.type==frontend logger
87c4376856a8

$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES
87c4376856a8 nginx:latest "nginx" Less than a second ago running 192.168.0.42:80->80/tcp node-1 trusting_yonath
963841b138d8 logger:latest "logger" Less than a second ago running node-1 happy_hawking
```

The `logger` container ends up on `node-1` because of its affinity with the `com.example.type=frontend` label.

#### Expression Syntax

9 changes: 9 additions & 0 deletions scheduler/filter/affinity.go
@@ -51,6 +51,15 @@ func (f *AffinityFilter) Filter(config *dockerclient.ContainerConfig, nodes []*n
if affinity.Match(images...) {
candidates = append(candidates, node)
}
default:
labels := []string{}
for _, container := range node.Containers {
labels = append(labels, container.Labels[affinity.key])
}
if affinity.Match(labels...) {
candidates = append(candidates, node)
}

}
}
if len(candidates) == 0 {
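The new `default` case above gathers the value of `affinity.key` from every container on a node and hands the collected list to `Match`. A standalone sketch of that idea follows; `matchAny` here is a simplified stand-in for Swarm's actual matcher (which also handles globs and regular expressions), not the real implementation:

```go
package main

import "fmt"

// container is a simplified stand-in for a node's container record.
type container struct {
	Labels map[string]string
}

// matchAny reports whether a set of candidate values satisfies a
// "==" or "!=" affinity. Only the equality cases are sketched here.
func matchAny(op, want string, values ...string) bool {
	for _, v := range values {
		if v == want {
			return op == "=="
		}
	}
	return op == "!="
}

func main() {
	node := []container{
		{Labels: map[string]string{"com.example.type": "frontend"}},
		{Labels: map[string]string{"com.example.type": "db"}},
	}

	// Collect the label values for the affinity key across the node's
	// containers, as the default case in the filter does.
	values := []string{}
	for _, c := range node {
		values = append(values, c.Labels["com.example.type"])
	}

	fmt.Println(matchAny("==", "frontend", values...)) // true: node is a candidate
	fmt.Println(matchAny("!=", "frontend", values...)) // false: node is excluded
}
```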
2 changes: 1 addition & 1 deletion scheduler/filter/expr.go
@@ -38,7 +38,7 @@ func parseExprs(key string, env []string) ([]expr, error) {

// validate key
// allow alpha-numeric
matched, err := regexp.MatchString(`^(?i)[a-z_][a-z0-9\-_]+$`, parts[0])
matched, err := regexp.MatchString(`^(?i)[a-z_][a-z0-9\-_.]+$`, parts[0])
**Contributor:** A tiny bit unsure about the effect on the regex: `.` is supposed to mean any character. Is this being interpreted as a dot or as any character, and do we need to escape it?

**Contributor (author):** I'll test, but I'm not sure it counts as any char when it's within `[]`.

**Contributor (author):** Seems good to me.

if err != nil {
return nil, err
}
16 changes: 12 additions & 4 deletions scheduler/filter/expr_test.go
@@ -15,14 +15,22 @@ func TestParseExprs(t *testing.T) {
_, err = parseExprs("constraint", []string{"constraint:node ==node1"})
assert.Error(t, err)

// Cannot use dot in key
_, err = parseExprs("constraint", []string{"constraint:no.de==node1"})
assert.Error(t, err)

// Cannot use * in key
_, err = parseExprs("constraint", []string{"constraint:no*de==node1"})
assert.Error(t, err)

// Cannot use $ in key
_, err = parseExprs("constraint", []string{"constraint:no$de==node1"})
assert.Error(t, err)

// Allow CAPS in key
_, err = parseExprs("constraint", []string{"constraint:NoDe==node1"})
assert.NoError(t, err)

// Allow dot in key
_, err = parseExprs("constraint", []string{"constraint:no.de==node1"})
assert.NoError(t, err)

// Allow leading underscore
_, err = parseExprs("constraint", []string{"constraint:_node==_node1"})
assert.NoError(t, err)
76 changes: 76 additions & 0 deletions test/integration/affinities.bats
@@ -0,0 +1,76 @@
#!/usr/bin/env bats

load helpers

function teardown() {
swarm_manage_cleanup
stop_docker
}

**Contributor:** Would you mind adding an image affinity test since you already made most of the work?

**Contributor (author):** yes!

**Contributor (author):** done

@test "container affinity" {
start_docker 2
swarm_manage

run docker_swarm run --name c1 -e constraint:node==node-0 -d busybox:latest sh
[ "$status" -eq 0 ]
run docker_swarm run --name c2 -e affinity:container==c1 -d busybox:latest sh
[ "$status" -eq 0 ]
run docker_swarm run --name c3 -e affinity:container!=c1 -d busybox:latest sh
[ "$status" -eq 0 ]

run docker_swarm inspect c1
[ "$status" -eq 0 ]
[[ "${output}" == *'"Name": "node-0"'* ]]

run docker_swarm inspect c2
[ "$status" -eq 0 ]
[[ "${output}" == *'"Name": "node-0"'* ]]

run docker_swarm inspect c3
[ "$status" -eq 0 ]
[[ "${output}" != *'"Name": "node-0"'* ]]
}

@test "image affinity" {
start_docker 2
swarm_manage

run docker -H ${HOSTS[0]} pull busybox
[ "$status" -eq 0 ]
run docker_swarm run --name c1 -e affinity:image==busybox -d busybox:latest sh
[ "$status" -eq 0 ]
run docker_swarm run --name c2 -e affinity:image!=busybox -d busybox:latest sh
[ "$status" -eq 0 ]

run docker_swarm inspect c1
[ "$status" -eq 0 ]
[[ "${output}" == *'"Name": "node-0"'* ]]

run docker_swarm inspect c2
[ "$status" -eq 0 ]
[[ "${output}" != *'"Name": "node-0"'* ]]
}

@test "label affinity" {
start_docker 2
swarm_manage

run docker_swarm run --name c1 --label test.label=true -e constraint:node==node-0 -d busybox:latest sh
[ "$status" -eq 0 ]
run docker_swarm run --name c2 -e affinity:test.label==true -d busybox:latest sh
[ "$status" -eq 0 ]
run docker_swarm run --name c3 -e affinity:test.label!=true -d busybox:latest sh
[ "$status" -eq 0 ]

run docker_swarm inspect c1
[ "$status" -eq 0 ]
[[ "${output}" == *'"Name": "node-0"'* ]]

run docker_swarm inspect c2
[ "$status" -eq 0 ]
[[ "${output}" == *'"Name": "node-0"'* ]]

run docker_swarm inspect c3
[ "$status" -eq 0 ]
[[ "${output}" != *'"Name": "node-0"'* ]]
}
4 changes: 2 additions & 2 deletions test/integration/helpers.bash
@@ -4,8 +4,8 @@
SWARM_ROOT=${SWARM_ROOT:-${BATS_TEST_DIRNAME}/../..}

# Docker image and version to use for integration tests.
DOCKER_IMAGE=${DOCKER_IMAGE:-aluzzardi/docker}
DOCKER_VERSION=${DOCKER_VERSION:-1.5}
DOCKER_IMAGE=${DOCKER_IMAGE:-dockerswarm/docker}
DOCKER_VERSION=${DOCKER_VERSION:-1.6}

# Host on which the manager will listen to (random port between 6000 and 7000).
SWARM_HOST=127.0.0.1:$(( ( RANDOM % 1000 ) + 6000 ))