Cannot get new connection from pool when the same pipeline reads from and writes to the same index #133

Closed
@andsel

Description

When we have a pipeline composed of an elasticsearch input that searches all documents in an index and an elasticsearch output that deletes the documents just loaded by the input, we get this error:

Elasticsearch::Transport::Transport::Error: Cannot get new connection from pool.
  perform_request at /tmp/logstash-7.7.0/vendor/bundle/jruby/2.5.0/gems/elasticsearch-transport-5.0.5/lib/elasticsearch/transport/transport/base.rb:254
  perform_request at /tmp/logstash-7.7.0/vendor/bundle/jruby/2.5.0/gems/elasticsearch-transport-5.0.5/lib/elasticsearch/transport/transport/http/manticore.rb:67
  perform_request at /tmp/logstash-7.7.0/vendor/bundle/jruby/2.5.0/gems/elasticsearch-transport-5.0.5/lib/elasticsearch/transport/client.rb:131
           search at /tmp/logstash-7.7.0/vendor/bundle/jruby/2.5.0/gems/elasticsearch-api-5.0.5/lib/elasticsearch/api/actions/search.rb:183
   search_request at /tmp/logstash-7.7.0/vendor/bundle/jruby/2.5.0/gems/logstash-input-elasticsearch-4.6.0/lib/logstash/inputs/elasticsearch.rb:321
     do_run_slice at /tmp/logstash-7.7.0/vendor/bundle/jruby/2.5.0/gems/logstash-input-elasticsearch-4.6.0/lib/logstash/inputs/elasticsearch.rb:269
           do_run at /tmp/logstash-7.7.0/vendor/bundle/jruby/2.5.0/gems/logstash-input-elasticsearch-4.6.0/lib/logstash/inputs/elasticsearch.rb:257
  • Version: 4.6.0
  • Operating System: Linux
  • Steps to Reproduce:
    1. Run an Elasticsearch node with Docker (a quick connectivity check is sketched after these steps):
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -e "ELASTIC_PASSWORD=changeme" docker.elastic.co/elasticsearch/elasticsearch:7.8.1
    2. Load 10^6 documents with data_filler_pipeline.conf (attached below):
bin/logstash -f data_filler_pipeline.conf
    3. Run a pipeline similar to the one provided by the customer (sut_pipeline.conf, attached below):
bin/logstash -f sut_pipeline.conf
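
Before running the pipelines, it may help to verify that the node is reachable and the credentials work; a minimal check against the cluster root endpoint (assuming the node from step 1 is listening on localhost:9200):

curl -u elastic:changeme http://localhost:9200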

Pipeline used to fill data (data_filler_pipeline.conf):

input {
  generator {
    message => "sample data"
    count => 1000000
  }
}

output {
  elasticsearch {
    index => "test_data"
    user => "elastic"
    password => "changeme"
  }
}
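
To confirm the index was populated before the test run, the document count can be checked against the standard _count API (assuming the same node and credentials as above); it should report 1000000 docs:

curl -u elastic:changeme "http://localhost:9200/test_data/_count"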

Pipeline used to reproduce the problem (sut_pipeline.conf):

input {
  elasticsearch {
    docinfo => true
    docinfo_fields => ["_id"]
    query => '{ "query": { "match_all": {} } }'
    scroll => "5m"
    slices => 2

    index => "test_data"
    user => "elastic"
    password => "changeme"
  }
}

output {
  elasticsearch {
    action => "delete"
    document_id => "%{[@metadata][_id]}"
    http_compression => true
    pool_max => 500
    pool_max_per_route => 80

    index => "test_data"
    user => "elastic"
    password => "changeme"
  }
}
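
For context on what the failing input is doing: with slices => 2 and scroll => "5m", the input splits the match_all search into two sliced scrolls that run in parallel, each keeping its own scroll context open. A rough sketch of the request each slice issues, based on the standard Elasticsearch sliced scroll API (the plugin's actual request may differ in details):

curl -u elastic:changeme -H 'Content-Type: application/json' \
  "http://localhost:9200/test_data/_search?scroll=5m" \
  -d '{ "slice": { "id": 0, "max": 2 }, "query": { "match_all": {} } }'

The second slice sends the same body with "id": 1. The stack trace above shows the error being raised from one of these sliced search requests (search_request called from do_run_slice), while the output is concurrently issuing deletes against the same index.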
