panic: runtime error: invalid memory address or nil pointer dereference #323
Comments
Thanks for the report; we'll fix it in the next version.
I have a similar issue:

```
2021/01/20 10:01:28 [closing] panic recovered: runtime error: invalid memory address or nil pointer dereference
goroutine 10981 [running]:
runtime/debug.Stack(0xc00f494a88, 0x11a12c0, 0x1b31020)
/usr/local/go/src/runtime/debug/stack.go:24 +0x9d
github.com/emitter-io/emitter/internal/broker.(*Conn).Close(0xc00964d680, 0x0, 0x0)
/go-build/src/github.com/emitter-io/emitter/internal/broker/conn.go:353 +0x32d
panic(0x11a12c0, 0x1b31020)
/usr/local/go/src/runtime/panic.go:969 +0x166
github.com/weaveworks/mesh.(*gossipSender).Broadcast(0xc0042cd6d0, 0xbe249f1f5be4, 0x1525020, 0xc00ab7e720)
/go/pkg/mod/github.com/weaveworks/mesh@v0.0.0-20191105120815-58dbcc3e8e63/gossip.go:202 +0xb2
github.com/weaveworks/mesh.(*gossipChannel).relayBroadcast(0xc0000d24c0, 0xbe249f1f5be4, 0x1525020, 0xc00ab7e720)
/go/pkg/mod/github.com/weaveworks/mesh@v0.0.0-20191105120815-58dbcc3e8e63/gossip_channel.go:116 +0x10b
github.com/weaveworks/mesh.(*gossipChannel).GossipBroadcast(0xc0000d24c0, 0x1525020, 0xc00ab7e720)
/go/pkg/mod/github.com/weaveworks/mesh@v0.0.0-20191105120815-58dbcc3e8e63/gossip_channel.go:83 +0x4f
github.com/emitter-io/emitter/internal/service/cluster.(*Swarm).Notify(0xc0003d4000, 0x1532520, 0xc007289300, 0x1)
/go-build/src/github.com/emitter-io/emitter/internal/service/cluster/swarm.go:359 +0xb5
github.com/emitter-io/emitter/internal/broker.(*Conn).onConnect(0xc00964d680, 0xc006e94a00, 0x4)
/go-build/src/github.com/emitter-io/emitter/internal/broker/conn.go:344 +0x1a6
github.com/emitter-io/emitter/internal/broker.(*Conn).onReceive(0xc00964d680, 0x15325e0, 0xc006e94a00, 0x0, 0x0)
/go-build/src/github.com/emitter-io/emitter/internal/broker/conn.go:194 +0x7aa
github.com/emitter-io/emitter/internal/broker.(*Conn).Process(0xc00964d680, 0x0, 0x0)
/go-build/src/github.com/emitter-io/emitter/internal/broker/conn.go:180 +0x1cb
created by github.com/emitter-io/emitter/internal/broker.(*Service).onAcceptConn
/go-build/src/github.com/emitter-io/emitter/internal/broker/service.go:315 +0x6e |
So, what should we do in the end?
Scaling down to only one broker seems to work...
Still an issue on 3.0.
@Florimond Still an issue in 3.1.
@L3o-pold What I thought was that this was an issue that occurred when forming a cluster, an issue that I have since fixed. To test it, I followed this procedure: I launch a server in VS Code with the following config:
And I launch a Docker container of the same version (v3.1) of emitter with:
I see the following message from the VS Code emitter instance when it starts:
And the following message in the docker logs:
Then I use the little sample that comes with the Python SDK to test publishing on a channel while connected to either instance. Now, if the issue reported here is different, do you mind clarifying it for me, as I'm a bit lost? Thanks a lot.
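If it helps anyone reproduce this, here is an equivalent smoke test in Go rather than the Python SDK sample, using the Eclipse Paho MQTT client (emitter speaks MQTT, and channels are addressed as "<key>/<channel>/"). The broker address, port, and channel key below are assumptions to replace with your own:

```go
package main

import (
	"fmt"
	"time"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
	// Assumed broker address: emitter's default listen port is 8080.
	opts := mqtt.NewClientOptions().AddBroker("tcp://127.0.0.1:8080")
	c := mqtt.NewClient(opts)
	if t := c.Connect(); t.Wait() && t.Error() != nil {
		panic(t.Error())
	}

	// Emitter topics take the form "<channel key>/<channel>/";
	// the key below is a placeholder, not a real key.
	key := "REPLACE_WITH_CHANNEL_KEY"
	topic := key + "/test/"

	c.Subscribe(topic, 0, func(_ mqtt.Client, m mqtt.Message) {
		fmt.Printf("received: %s\n", m.Payload())
	}).Wait()
	c.Publish(topic, 0, false, "hello from the cluster test").Wait()
	time.Sleep(time.Second) // give the broker time to route the message back
	c.Disconnect(250)
}
```

Pointing the same client at each instance in turn is a quick way to confirm messages cross the cluster.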
We ran into this problem in production. The cause of the error appears to be the connection's Close() function being triggered by a failing write.
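If that diagnosis is right, the failure mode would be the classic unsynchronized-teardown race: one goroutine is still broadcasting through a pointer while Close() clears the state it points to. A minimal sketch with hypothetical names (this is not the actual emitter code):

```go
package main

import "sync"

// Hypothetical reduction of the suspected race: Close() tears down the gossip
// sender while another goroutine is still broadcasting through it, so the
// broadcast dereferences a nil pointer, as in the trace above.

type sender struct{ mu sync.Mutex }

// Broadcast dereferences its receiver; reaching it through a nil *sender panics.
func (s *sender) Broadcast(msg string) {
	s.mu.Lock()
	defer s.mu.Unlock()
}

type conn struct{ s *sender }

// notify reads c.s without any synchronization, racing with Close.
func (c *conn) notify(msg string) { c.s.Broadcast(msg) }

// Close clears the sender; a concurrent notify may then call Broadcast on nil.
func (c *conn) Close() { c.s = nil }

func main() {
	c := &conn{s: &sender{}}
	go c.Close()
	c.notify("connect") // may panic: invalid memory address or nil pointer dereference
}
```

Note that the recover in Close() (visible at the top of the trace) only logs such a crash; actually fixing it would require synchronizing teardown with the broadcast path, e.g. a mutex or a nil check under lock.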
I'm running the latest version from Docker Hub. I don't know whether the latest version still has this problem, but it could cause a service outage!