CrateDB Docker image 3.2.2 gives an error

When updating my Docker container from 3.1.5 to 3.2.2 and starting it again, it is not adopted by the cluster.
I get the error:
/docker-entrypoint.sh: line 24: exec: '-Cnetwork.host=0.0.0.0': not found
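That entrypoint message suggests the script is trying to exec the first argument as a binary. A guess, based on the error alone: the 3.2.x image may expect the `crate` command to be given explicitly before any `-C` options, rather than only the options. A sketch of what that invocation could look like (image name, container name, and data path are placeholders, not taken from the original post):

```shell
# Assumption: the 3.2.x entrypoint execs its first argument as the binary,
# so "crate" must be passed explicitly before the -C options.
docker run -d --name crate \
  -v /path/to/data:/data \
  crate:3.2.2 \
  crate -Cnetwork.host=0.0.0.0
```

If the old invocation passed only `-Cnetwork.host=0.0.0.0` as the command, that would explain the "not found" error, since the shell would try to execute the option itself.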

I then tried to start a new container pointing at the same data. The container starts, but now gives me this error:

[2019-01-31T10:30:42,177][ERROR][i.c.e.r.s.c.c.TablesNeedUpgradeSysCheck] [Wildhorn] error while checking for tables that need upgrade
org.elasticsearch.transport.ActionNotFoundTransportException: No handler for action [internal:crate:sql/job]
	at org.elasticsearch.transport.TcpTransport.handleRequest(TcpTransport.java:1497) [crate-app-3.2.2.jar:3.2.2]
	at org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1380) [crate-app-3.2.2.jar:3.2.2]
	at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:64) [crate-app-3.2.2.jar:3.2.2]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) [netty-codec-4.1.31.Final.jar:4.1.31.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:297) [netty-codec-4.1.31.Final.jar:4.1.31.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:413) [netty-codec-4.1.31.Final.jar:4.1.31.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) [netty-codec-4.1.31.Final.jar:4.1.31.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.31.Final.jar:4.1.31.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.31.Final.jar:4.1.31.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-transport-4.1.31.Final.jar:4.1.31.Final]
	at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) ~[netty-transport-4.1.31.Final.jar:4.1.31.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
	at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241) [netty-handler-4.1.31.Final.jar:4.1.31.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:628) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:528) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:482) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884) [netty-common-4.1.31.Final.jar:4.1.31.Final]
	at java.lang.Thread.run(Thread.java:748) [?:?]

Hmm, rolling it back to 3.1.5 gives me problems now too. Crap!
How can I solve this without losing my data?
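Before experimenting with further version switches, it would probably be wise to take a copy of the data directory so that no attempt can make things worse. A minimal sketch, assuming the container is named `crate` and the data volume is mounted from a host path (both placeholders):

```shell
# Hypothetical names/paths: adjust to match your container and volume mount.
docker stop crate
# Archive the whole data directory before trying any more up/downgrades.
tar -czf crate-data-backup-$(date +%F).tar.gz -C /path/to/data .
```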

Caused by: java.io.IOException: failed to read [id:130, legacy:false, file:/data/data/nodes/0/indices/r0ngdcDaR7a-1sDfyZR-cA/_state/state-130.st]
	at org.elasticsearch.gateway.MetaDataStateFormat.loadLatestState(MetaDataStateFormat.java:327) ~[crate-app-3.1.5.jar:3.1.5]
	at org.elasticsearch.common.util.IndexFolderUpgrader.upgrade(IndexFolderUpgrader.java:90) ~[crate-app-3.1.5.jar:3.1.5]
	at org.elasticsearch.common.util.IndexFolderUpgrader.upgradeIndicesIfNeeded(IndexFolderUpgrader.java:128) ~[crate-app-3.1.5.jar:3.1.5]
	at org.elasticsearch.gateway.GatewayMetaState.<init>(GatewayMetaState.java:87) ~[crate-app-3.1.5.jar:3.1.5]
	at org.elasticsearch.node.Node.<init>(Node.java:397) ~[crate-app-3.1.5.jar:3.1.5]
	at io.crate.node.CrateNode.<init>(CrateNode.java:66) ~[crate-app-3.1.5.jar:3.1.5]
	at org.elasticsearch.bootstrap.BootstrapProxy$1.<init>(BootstrapProxy.java:202) ~[crate-app-3.1.5.jar:3.1.5]
	at org.elasticsearch.bootstrap.BootstrapProxy.setup(BootstrapProxy.java:202) ~[crate-app-3.1.5.jar:3.1.5]
	at org.elasticsearch.bootstrap.BootstrapProxy.init(BootstrapProxy.java:267) ~[crate-app-3.1.5.jar:3.1.5]
	at io.crate.bootstrap.CrateDB.init(CrateDB.java:155) ~[crate-app-3.1.5.jar:3.1.5]

I have one table that is set up with 1 replica on a 3-node cluster.
Now 1 node is down, and I see most tables with a warning while they sync data.
But one table is critical and missing data. That shouldn't be possible with a 1-replica table on a 3-node cluster, right?
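To see exactly which shards of that table are affected, I have been looking at the `sys.shards` table over the HTTP endpoint. A sketch, assuming the default HTTP port 4200 on localhost (adjust host/port to your setup):

```shell
# Query sys.shards to list each shard's state (STARTED, RECOVERING,
# UNASSIGNED, ...) and whether it is a primary, per table.
curl -sS -X POST 'http://localhost:4200/_sql' \
  -H 'Content-Type: application/json' \
  -d '{"stmt": "SELECT table_name, id, \"primary\", state FROM sys.shards ORDER BY table_name, id"}'
```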