Docker logging handler #576
base: branch-2.2-kubernetes
Conversation
I think the …
Thanks for the review @foxish. Any link to the Docker image building change?
That change isn't in yet - but we did have a discussion about it in today's meeting. It'll likely be started by the work in #521
Reading the meeting notes, I realize I had to leave just before you talked about the PR I submitted. Too bad... I will also follow up on the point about the Docker images. For this PR: if the other way to build Docker images is implemented quickly, there is no need to merge this; otherwise I think it can help give a better view of what is happening (or at least make a manual tester less worried) during the long silent pause while running the integration tests.
@echarles wrote:
Thanks for looking into this. I also observed the same issue, which tests developer patience :-) We recently fixed this in the
Can you try this approach in this PR? If it works, then we would not need the custom logging handler nor println. |
Hello @kimoonkim, thanks for the review and suggestion. I have pushed an update which now uses slf4j-log4j without the custom handler. In my previous tests, I remember trying that and getting verbose output like:
Seeing nothing is not fine, but seeing too much is not better... A combined solution would be to use
# Set everything to be logged to the file target/integration-tests.log
log4j.rootCategory=INFO, file
# Set everything to be logged to the file target/integration-tests.log and the console
log4j.rootCategory=INFO, file, stdout
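For context, a full log4j.properties wiring both appenders might look like the following. This is an illustrative sketch, not necessarily the exact configuration the test suite uses; the appender classes and conversion patterns are assumptions.

```properties
# Route INFO-level logs to both a file and the console (illustrative example)
log4j.rootCategory=INFO, file, stdout

# File appender writing to the integration-test log file
log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.file=target/integration-tests.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# Console appender so progress is visible while the tests run
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
```

Noisy categories can then be quieted selectively, e.g. `log4j.logger.org.apache.http=WARN`, instead of dropping the console appender entirely.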
I am not sure we want console here. On the upside, I understand most people actually do not know they can check target/integration-tests.log. On the downside, too many log messages could get people lost.
Does anyone have a strong opinion on this?
Ping! @echarles or anyone else.
At this point, I am good with this PR pending this question being resolved. Maybe we just want to remove the console output part and move on so the PR can be merged?
I would say this is actually better because the long silence is much worse. These messages at least help us debug which image has failed to build.
That sounds good to me, but @foxish was saying we want to retire this Scala Docker image builder code.
So maybe we should first agree on what we want to do next, and this PR is good enough as is?
@echarles Thanks for the change!
@kimoonkim I just made a new push reintroducing the custom LoggingBuildHandler to have a medium level of messages on the console.
@throws[DockerException]
def progress(message: ProgressMessage) {
  if (message.error != null) throw new DockerException(message.toString)
  else if (message.status == null) {
What does the null status mean?
There is not much in the spotify source. It just says:
// In 1.8, the message instead looks like
// {"status":"<some-tag>: digest: <digest> size: <size>"}
I came to this while trying to find the right level of output.
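Putting the pieces together, here is a hedged sketch of how such a handler could route these messages through log4j instead of println. The class name is illustrative (mirroring the LoggingBuildHandler mentioned above), and the spotify docker-client types (`ProgressHandler`, `ProgressMessage`, `DockerException`) are assumed to behave as in the snippet under review.

```scala
import com.spotify.docker.client.ProgressHandler
import com.spotify.docker.client.exceptions.DockerException
import com.spotify.docker.client.messages.ProgressMessage
import org.apache.log4j.LogManager

// Illustrative sketch, not the PR's exact implementation.
class LoggingBuildHandler extends ProgressHandler {

  private val log = LogManager.getLogger(classOf[LoggingBuildHandler])

  @throws[DockerException]
  override def progress(message: ProgressMessage): Unit = {
    if (message.error != null) {
      // An error entry in the build stream aborts the build.
      throw new DockerException(message.toString)
    } else if (message.status == null) {
      // Messages without a status carry the interesting build output;
      // status-only messages (e.g. push digests) are skipped to stay terse.
      log.info(s"build: $message")
    }
  }
}
```

With log4j in charge, whether these lines reach the console or only target/integration-tests.log is then decided by the log4j.properties appender configuration rather than by the handler itself.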
def progress(message: ProgressMessage) {
  if (message.error != null) throw new DockerException(message.toString)
  else if (message.status == null) {
    println("build: {}", message)
Can we avoid using println and use the logging API instead?
Just pushed a new commit using log4j instead of println.
The last push takes into account all of @kimoonkim's comments (no longer logging to the console but to a file, plus a small scaladoc added to make this information available to newcomers).
LGTM. Thanks for writing this change!
Does anyone else want to take a look at this PR?
trait Logging {

  private val log: Logger = LogManager.getLogger(this.getClass)
I think Spark already has something similar in core?
Yep, I have seen that also. Does it make sense to follow up with this PR and the apache repo?
If possible, let's reuse what Spark has; the same question will arise when we try to merge back into Spark.
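For reference, a minimal sketch of what such a Logging mix-in looks like, patterned after Spark's own `org.apache.spark.internal.Logging`. The method names here are illustrative; if the Spark trait is already on the classpath, reusing it directly is the better option.

```scala
import org.apache.log4j.{LogManager, Logger}

// Minimal sketch of a Logging mix-in (illustrative; prefer Spark's trait).
trait Logging {

  // One logger per concrete class, so log lines are attributed correctly.
  private lazy val log: Logger = LogManager.getLogger(this.getClass)

  // By-name parameters avoid building the message string when the level is off.
  protected def logInfo(msg: => String): Unit =
    if (log.isInfoEnabled) log.info(msg)

  protected def logWarning(msg: => String): Unit = log.warn(msg)
}
```

A class then simply mixes it in (`class DockerImageBuilder extends Logging`) and calls `logInfo(...)`, instead of constructing its own `Logger` field.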
What changes were proposed in this pull request?
To give a better view of the integration tests' progress while the Docker images are built, this custom handler logs build progress.
How was this patch tested?
Run the integration tests and check that you see output like this: