Client - Server version number matching

I am looking for people’s preferences on how we keep the version numbers in sync (or not). We currently do versioning like 1.2.3 where those correspond to something like major.minor.bug fix. My question is what should we do when we change just the server side OR the client side code. e.g. if we had

client side = v1.2.0
server side = v1.2.0

and we fix a bug on the client side so it moves to v1.2.1, should we bump up the server-side version number so they match?

Same question if we add a minor feature update on the client, taking it to say 1.3.0: should we bump the server version to 1.3.0 too?

Or, is no one else thinking about version numbers in this way and no one cares?



We thought about that with Amadou a while ago… and that is why the DataSHIELD configuration reported by datashield.methods() includes the name of each package and its version. This way a client can verify that its server-side counterpart has at least version x.y.z. Then there is no need to artificially increment the server's version number when a client is upgraded (besides, it is always painful to deploy new server packages on all the nodes).


I definitely agree that avoiding extra needless server updates makes a lot of sense!

So, assuming we allow the version numbers to diverge, how would we manage breaking changes? e.g. if we had

client side = 1.2.3
server side = 1.4.5

and a client-side upgrade to 1.3.0 required server-side version 1.5.0 or above. Do we have a mechanism to enforce that requirement? I don't mean force the upgrade, but stop a function from running and let the user know the versions are incompatible?



There is currently no ready-to-use function for package version checking. I can add one in DSI, something like:

datashield.pkg_check(conns, name, version)


  • conns: the DSConnection objects (also known as opals or datasources)
  • name: the name of the server-side package
  • version: the minimum version number required

For example, in a DS client function call:

datashield.pkg_check(datasources, "dsModelling", "5.0.0")

and the behaviour could be to stop when a server does not have the minimum required version.
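A minimal sketch of the comparison such a check could perform, using base R's package_version() (the function name and behaviour above are the actual proposal; the helper below is purely illustrative):

```r
# Illustrative sketch only (not the DSI implementation): compare an
# installed version against a required minimum with package_version(),
# which compares component by component ("4.10" > "4.9", unlike strings).
has_min_version <- function(installed, required) {
  package_version(installed) >= package_version(required)
}

has_min_version("4.0.0", "4.1")  # FALSE: the check would stop() here
has_min_version("4.1.0", "4.1")  # TRUE
```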

As retrieving the package version from each server is quite costly, the results of the requests could be cached in a hidden variable (cleared on datashield.login). That would speed up the version checks in client functions.
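The caching idea could look something like the following sketch (a hidden environment used as a memo table; the names here are illustrative, not the actual DSI internals):

```r
# Illustrative caching sketch: keep the per-server package versions in a
# hidden environment so the costly remote query runs at most once.
.ds_pkg_cache <- new.env(parent = emptyenv())

cached_pkg_versions <- function(server, fetch) {
  if (is.null(.ds_pkg_cache[[server]])) {
    .ds_pkg_cache[[server]] <- fetch(server)  # expensive remote call, done once
  }
  .ds_pkg_cache[[server]]
}

# Would be called on datashield.login to invalidate the cache.
clear_pkg_cache <- function() {
  rm(list = ls(.ds_pkg_cache), envir = .ds_pkg_cache)
}
```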


It would be good if we could move to having matching 'major' and 'minor' numbers for client- and server-side packages, with 'bug' increasing whenever a behavioural change has occurred. So 'bug' wouldn't change for testing-only or documentation changes; in that case a "build number" would be useful.

The logical conclusion (possibly) is that we should be increasing the 'minor' number of 'dsBetaTest' and 'dsBetaTestClient' when new functions are added.


The more I think about this the more confused I get! There are three different scenarios that I can think of:

  • Client side (CS) updated with a new feature, but server side (SS) not updated. Here we'd be confident that the new CS works with the current SS release, but we wouldn't have tested against earlier SS releases. So the CS would have to declare a minimum SS version (SS >= vX.X.X).

  • SS updated but not CS. The inverse of the above, so the server side would have to mandate a CS version >= vX.X.X.

  • CS and SS both updated. Both of the above apply. Although, since a version number isn't assigned until the release is made, this gets a little awkward.

So the server would have to report which version it currently has, and which minimum CS version it expects to work with. The CS function would then have to check.

I am also assuming versioning at the package level, not the function level.

Am I over complicating this?

Yes, too much complexity. We can assume the client is using the latest version (if not, it is easy to upgrade), and then the only check is whether a new or modified (in a non-backward-compatible way) function is supported by the server (= a server version check). It is also the DS developer's responsibility to guarantee a minimum of stability in function behaviour: it's better to create a new function and deprecate the old one than to introduce a non-backward-compatible change. That's the approach we have at OBiBa with our application web services, and it works.


I have implemented the datashield.pkg_check function in DSI, so you can verify whether it covers your needs.

The output looks like this:

> datashield.pkg_check(conns, "dsBase", "4.1")
Error: Package dsBase on server study1 has not the minimum version required: 4.0.0 < 4.1
> datashield.pkg_check(conns, "dsBase", "4.0")
> datashield.pkg_check(conns, "dsStats", "4.0")
Error: Package dsStats is not installed on server study4. Minimum version required is 4.0
> datashield.pkg_check(conns, "dsDummy", "4.1")
Error: Package dsDummy is not installed on any of the servers. Minimum version required is 4.1

The package version / server matrix is cached and this cache is cleared on login, logout and when switching the default connections (equivalent to Paul’s “default opals”).


It could be beneficial to try to keep the 'major' version number the same across all packages.


Can we return the version of a package when we load the package?

For example when I load the mvmeta package in R, I get the following information:

> library(mvmeta)

This is mvmeta 0.4.11. For an overview type: help('mvmeta-package').

Something similar would be useful in DataSHIELD, at least for the client packages.
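For an R package, this kind of startup message is typically produced in an .onAttach() hook with packageStartupMessage(); a sketch of what a DataSHIELD client package could add (the hook goes in the package source, e.g. R/zzz.R):

```r
# Illustrative sketch: print the package version when the package is
# attached, as mvmeta does.
.onAttach <- function(libname, pkgname) {
  version <- utils::packageVersion(pkgname)
  packageStartupMessage(
    "This is ", pkgname, " ", version,
    ". For an overview type: help('", pkgname, "-package')."
  )
}
```

Using packageStartupMessage() rather than message() matters, because it lets users silence the banner with suppressPackageStartupMessages().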


You are correct: that is useful and important. Yannick has implemented something similar in Opal technologies.

