This repository was archived by the owner on Feb 1, 2021. It is now read-only.
Merged
Changes from 3 commits
2 changes: 1 addition & 1 deletion Godeps/Godeps.json

Some generated files are not rendered by default. Learn more about how customized files appear on GitHub.


29 changes: 28 additions & 1 deletion scheduler/filter/README.md
@@ -101,7 +101,7 @@ without specifying them when starting the node. Those tags are sourced from

#### Containers

You can schedule 2 containers and make the container #2 next to the container #1.
You can schedule 2 containers and make the container #2 next to the container #1 using its name or ID.
Contributor

Doc feedback - how about:

You can force containers to be co-scheduled on the same node as another container by specifying its name or ID.

/cc @moxiegirl


```bash
$ docker run -d -p 80:80 --name front nginx
@@ -164,6 +164,33 @@ CONTAINER ID IMAGE COMMAND CREATED

As you can see here, the containers were only scheduled on nodes with the `redis` image already pulled.

#### Labels

You can schedule 2 containers and make the container #2 next to the container #1 using its labels.
Contributor

You can also force containers to be co-scheduled on the same node as another container by specifying its labels.

/cc @moxiegirl


```bash
$ docker run -d -p 80:80 --label com.example.type=front nginx
Contributor

nit: frontend?

87c4376856a8

$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES
87c4376856a8 nginx:latest "nginx" Less than a second ago running 192.168.0.42:80->80/tcp node-1 trusting_yonath
```

Using `-e affinity:com.example.type==front` will schedule a container next to the container carrying the label `com.example.type=front`.

```bash
$ docker run -d -e affinity:com.example.type==front logger
87c4376856a8

$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES
87c4376856a8 nginx:latest "nginx" Less than a second ago running 192.168.0.42:80->80/tcp node-1 trusting_yonath
963841b138d8 logger:latest "logger" Less than a second ago running node-1 happy_hawking
```

The `logger` container ends up on `node-1` because of its affinity with the `front` container.
Contributor

The logger container ends up on node-1 since that node has a container (nginx) labeled as com.example.type=front

/cc @moxiegirl


#### Expression Syntax

An affinity or a constraint expression consists of a `key` and a `value`.
9 changes: 9 additions & 0 deletions scheduler/filter/affinity.go
@@ -51,6 +51,15 @@ func (f *AffinityFilter) Filter(config *dockerclient.ContainerConfig, nodes []*n
if affinity.Match(images...) {
candidates = append(candidates, node)
}
default:
labels := []string{}
for _, container := range node.Containers {
labels = append(labels, container.Labels[affinity.key])
}
if affinity.Match(labels...) {
candidates = append(candidates, node)
}

}
}
if len(candidates) == 0 {
2 changes: 1 addition & 1 deletion scheduler/filter/expr.go
@@ -38,7 +38,7 @@ func parseExprs(key string, env []string) ([]expr, error) {

// validate key
// allow alpha-numeric
matched, err := regexp.MatchString(`^(?i)[a-z_][a-z0-9\-_]+$`, parts[0])
matched, err := regexp.MatchString(`^(?i)[a-z_][a-z0-9\-_.]+$`, parts[0])
Contributor

A tiny bit unsure about the effect on the regex: `.` is supposed to mean any character.

Is this being interpreted as a dot or as any character, and do we need to escape it?

Contributor Author

I'll test, but I'm not sure it counts as any char when it's within `[]`.

Contributor Author

Seems good to me

if err != nil {
return nil, err
}
8 changes: 4 additions & 4 deletions scheduler/filter/expr_test.go
@@ -15,14 +15,14 @@ func TestParseExprs(t *testing.T) {
_, err = parseExprs("constraint", []string{"constraint:node ==node1"})
assert.Error(t, err)

// Cannot use dot in key
_, err = parseExprs("constraint", []string{"constraint:no.de==node1"})
assert.Error(t, err)

// Cannot use * in key
_, err = parseExprs("constraint", []string{"constraint:no*de==node1"})
assert.Error(t, err)

// Allow dot in key
_, err = parseExprs("constraint", []string{"constraint:no.de==node1"})
assert.NoError(t, err)

// Allow leading underscore
_, err = parseExprs("constraint", []string{"constraint:_node==_node1"})
assert.NoError(t, err)
56 changes: 56 additions & 0 deletions test/integration/affinities.bats
@@ -0,0 +1,56 @@
#!/usr/bin/env bats

load helpers

function teardown() {
swarm_manage_cleanup
stop_docker
}

Contributor

Would you mind adding an image affinity test, since you already did most of the work?

Contributor Author

yes!

Contributor Author

done

@test "container affinity" {
start_docker 2
swarm_manage

run docker_swarm run --name c1 -e constraint:node==node-0 -d busybox:latest sh
[ "$status" -eq 0 ]
run docker_swarm run --name c2 -e affinity:container==c1 -d busybox:latest sh
[ "$status" -eq 0 ]
run docker_swarm run --name c3 -e affinity:container!=c1 -d busybox:latest sh
[ "$status" -eq 0 ]

run docker_swarm inspect c1
[ "$status" -eq 0 ]
[[ "${output}" == *'"Name": "node-0"'* ]]

run docker_swarm inspect c2
[ "$status" -eq 0 ]
[[ "${output}" == *'"Name": "node-0"'* ]]

run docker_swarm inspect c3
[ "$status" -eq 0 ]
[[ "${output}" == *'"Name": "node-1"'* ]]
Contributor

Perhaps match this against `output != node-0` instead? That way it would still work if we launch 3 docker engines later on.

Contributor Author

ok

}

@test "label affinity" {
start_docker 2
swarm_manage

run docker_swarm run --name c1 --label test.label=true -e constraint:node==node-0 -d busybox:latest sh
[ "$status" -eq 0 ]
run docker_swarm run --name c2 -e affinity:test.label==true -d busybox:latest sh
[ "$status" -eq 0 ]
run docker_swarm run --name c3 -e affinity:test.label!=true -d busybox:latest sh
[ "$status" -eq 0 ]

run docker_swarm inspect c1
[ "$status" -eq 0 ]
[[ "${output}" == *'"Name": "node-0"'* ]]

run docker_swarm inspect c2
[ "$status" -eq 0 ]
[[ "${output}" == *'"Name": "node-0"'* ]]

run docker_swarm inspect c3
[ "$status" -eq 0 ]
[[ "${output}" == *'"Name": "node-1"'* ]]
Contributor

Same feedback

Contributor Author

ok

}
4 changes: 2 additions & 2 deletions test/integration/helpers.bash
@@ -4,8 +4,8 @@
SWARM_ROOT=${SWARM_ROOT:-${BATS_TEST_DIRNAME}/../..}

# Docker image and version to use for integration tests.
DOCKER_IMAGE=${DOCKER_IMAGE:-aluzzardi/docker}
DOCKER_VERSION=${DOCKER_VERSION:-1.5}
DOCKER_IMAGE=${DOCKER_IMAGE:-dockerswarm/docker}
DOCKER_VERSION=${DOCKER_VERSION:-1.6}

# Host on which the manager will listen to (random port between 6000 and 7000).
SWARM_HOST=127.0.0.1:$(( ( RANDOM % 1000 ) + 6000 ))