
Conversation

@sboschman

Make it possible to generate unique port numbers for the entire Jenkins cluster instead of unique port numbers per node.
Use case: jobs running on different Jenkins nodes can start an application container for integration testing on the same remote host. The application container is supplied by a Docker container and is started on a Docker cluster.
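To make the use case concrete, here is a minimal Pipeline-style sketch (not part of this PR): HTTP_PORT stands in for a port that would be allocated uniquely across the whole cluster rather than per node, and swarm-manager:2375 and myapp:latest are placeholder names.

node('linux') {
  // Sketch only: assumes HTTP_PORT was allocated cluster-wide and exported
  // into the environment, which is what this change proposes.
  sh '''
    # Start the application container on a shared remote Docker host.
    docker -H tcp://swarm-manager:2375 run -d -p ${HTTP_PORT}:8080 myapp:latest
    # Run the integration tests against the remote host on the allocated port.
    curl --retry 10 --retry-delay 3 http://swarm-manager:${HTTP_PORT}/health
  '''
}

Without cluster-wide allocation, two jobs on different nodes can be handed the same HTTP_PORT, and the second docker run fails with a port conflict on the shared host.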

@jenkinsadmin
Member

Thank you for your pull request! Please check this document for how the Jenkins project handles pull requests.

@tszpinda

Hi guys, are you planning to get this merged? I have exactly the same use case.

@sboschman
Author

Maybe you can drop a note on the dev list in support of this feature to help get it merged (https://groups.google.com/forum/#!searchin/jenkinsci-dev/port-allocator/jenkinsci-dev/I9RIMAHqqTg/yTl248ScHKgJ)?

@andresrc

Hello,

My main concern with this PR is that we are "allocating" ports for use on slaves while only checking their availability on the master. I may have misunderstood the use case, or maybe it is not in the scope of this plugin.

Please open a JIRA issue and/or a mailing list thread to elaborate.

Thanks.

@elferink
Copy link

@andresrc If the port gets claimed by a node but is not actually used on that node itself, you can get conflicts.

Example:
Current situation:
Node A -> Job 1 gets port 8000 -> port gets used on e.g. docker-swarm cluster -> Docker-node A port 8000
Node B -> Job 2 gets port 8000 -> port gets used on e.g. docker-swarm cluster -> Docker-node A port 8000

With this PR:
Node A -> Job 1 gets port 8000 -> port gets used on e.g. docker-swarm cluster -> Docker-node A port 8000
Node B -> Job 2 gets port 8001 -> port gets used on e.g. docker-swarm cluster -> Docker-node A port 8001
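
A toy Groovy illustration of the difference (not plugin code; the 8000-8999 range is an arbitrary assumption): per-node bookkeeping lets two nodes hand out the same port for the shared Docker host, while a single cluster-wide pool cannot.

def perNode = ['A': [] as Set, 'B': [] as Set]   // one used-port set per node (current behaviour)
def clusterWide = [] as Set                      // one shared set (this PR)

def allocPerNode = { nodeName ->
    def p = (8000..8999).find { !(it in perNode[nodeName]) }
    perNode[nodeName] << p
    p
}
def allocClusterWide = {
    def p = (8000..8999).find { !(it in clusterWide) }
    clusterWide << p
    p
}

assert allocPerNode('A') == 8000 && allocPerNode('B') == 8000    // both jobs hit Docker-node A port 8000
assert allocClusterWide() == 8000 && allocClusterWide() == 8001  // unique across the cluster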

@andresrc

Thanks @elferink, in that case I had understood correctly. The main concern then is that with the approach used in this PR we are actually "reserving" the ports on the master, where they are not going to be used either. An approach similar to the one used in #7, where a specific data structure is used, would avoid that. However, there are also other concerns in that PR, as stated there.

Thanks!

@elferink

@andresrc It's not that we're reserving ports on the master itself; rather, the master holds the pool.
Maybe the wording could be different, but the need for a feature like this should be evident enough given that there are two PRs trying to achieve it.

#7 calls the option 'global visibility', which indicates quite clearly what it does. There is also the possibility of adding help text.
Currently in this PR it looks like this:

[Screenshot of the new option in the job configuration form, 2015-11-26]

@jlrigau

jlrigau commented Nov 26, 2015

@elferink 👍

If necessary, I can adjust the help text on #7.

@andresrc

Thanks again for your comments.

In this PR, the port is actually being reserved on the master: reserved in the sense that it isn't available to any job there (including those unrelated to the ones using the pool).

Even without this issue (which is solved differently in #7), let me insist on the scope perspective: the goal of this plugin is to coordinate the allocation of physical ports on the nodes performing the execution.

The coordination of other kinds of resources, including external resources and pools of those resources, is out of the scope of this plugin, even if those resources happen to be ports on external hosts.

The expectation already set for users of this plugin is that we check the "physical" availability of the ports on every slave. Including this coordination of external resources would be confusing for users not facing the target use case, as they may end up with the same allocated port number in two jobs on the same machine.

As I said before, the use case is legit, but IMHO this plugin is not the place to solve it.
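
To illustrate the distinction being drawn here, a hedged Groovy sketch (not the plugin's actual code): a "physical" availability check tries to bind the port on the machine running the job, while a pool reservation only removes a number from shared bookkeeping, regardless of what is listening anywhere.

boolean physicallyFree(int port) {
    try {
        new ServerSocket(port).close()   // bind succeeded, so the port is free on this host
        return true
    } catch (IOException ignored) {
        return false                     // something already listens on this host
    }
}

Integer reserveFromPool(Set<Integer> reserved, IntRange range) {
    Integer p = range.find { !(it in reserved) }   // no socket is ever opened anywhere
    if (p != null) { reserved << p }
    return p
}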

@subos2008

If we forked this plugin, just changed the name, and made Global the default, would people accept that? There is currently no way to allocate from server-side pools in Jenkins, and that's a huge pain when you're testing cloud services.

@subos2008

For others looking for this kind of functionality, it looks like it can also be achieved via the Lockable Resources Plugin. See this PR for a lead: https://github.com/jenkinsci/lockable-resources-plugin/pull/50. This is also Pipeline-compatible.

lock(label: 'credentials_pool', quantity: 1, variable: 'MYVAR') {
  sh 'echo $MYVAR'
}

This executes as:

[Pipeline] {
[Pipeline] lock
Trying to acquire lock on [Label: credentials_pool, Quantity: 1, Variable: MYVAR]
Lock acquired on [Label: credentials_pool, Quantity: 1, Variable: MYVAR]
[Pipeline] {
[Pipeline] sh
[test4-2] Running shell script
+ echo cred1
cred1
[Pipeline] }
Lock released on resource [Label: credentials_pool, Quantity: 1, Variable: MYVAR]
[Pipeline] // lock
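
Building on that, a hedged sketch of how the same pattern could cover the port-pool use case discussed in this PR. It assumes lockable resources named "8000", "8001", ... have been created with the label swarm_ports, so the locked resource name doubles as the port number; swarm-manager and myapp:latest are placeholders.

lock(label: 'swarm_ports', quantity: 1, variable: 'APP_PORT') {
  // APP_PORT holds the name of the locked resource, i.e. the reserved port number.
  sh 'docker -H tcp://swarm-manager:2375 run -d -p ${APP_PORT}:8080 myapp:latest'
}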
