Action fails even if fail_on_error is false #89

Open
jeromecambon opened this issue Nov 25, 2022 · 11 comments
Labels: bug Something isn't working

Comments

jeromecambon commented Nov 25, 2022

I'm giving the vale-action the following parameters:

  • fail_on_error: false
  • vale_flags: "--no-exit"
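
In workflow terms, the step looks roughly like this (other inputs omitted):

  - name: Vale Linter
    uses: errata-ai/vale-action@reviewdog
    with:
      fail_on_error: false
      vale_flags: "--no-exit"
    env:
      GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}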

However, the action still fails whenever reviewdog fails:

/bin/reviewdog -f=rdjsonl -name=vale -reporter=github-pr-check -fail-on-error=false -filter-mode=added -level=info
...
Error: reviewdog: Too many results (annotations) in diff.
  You may miss some annotations due to GitHub limitation for annotation created by logging command.
  Please check GitHub Actions log console to see all results.
...
Error: reviewdog exited with status code: 1

Whatever reviewdog returns, the Vale action should not fail when fail_on_error is false.

jeromecambon (Author) commented Nov 25, 2022

Any workaround for this? Thanks!

jeromecambon (Author) commented Nov 25, 2022

I have a fix for this, but don't have the rights to create a PR.

michelle-purcell commented Nov 30, 2022

@jeromecambon - Thank you for raising this issue. I can also reproduce this problem in a GitHub Actions workflow.

The issue occurs regardless of whether I use actions/checkout@v3 or actions/checkout@v2.

Here's an extract of the YAML configuration that reproduces the issue:


jobs:
  vale:
    name: Linting with Vale
    runs-on: ubuntu-latest
    permissions:
      actions: read
      checks: read
      contents: read
      pull-requests: read
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Vale Linter
        uses: errata-ai/vale-action@reviewdog
        with:
          fail_on_error: false
          vale_flags: "--no-exit --config=docs/.vale/vale.ini"
          filter_mode: diff_context
          files: docs/src/main/asciidoc/
        env:
          GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}

@jdkato - Some context: we are piloting Vale in one project, and during the initial introduction of the Vale linter to the Git workflow we would like to ensure that a PR is never blocked, regardless of whether there are errors, warnings, or suggestions. Is there a workaround for this issue to prevent the build failure? Thank you.
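
In the meantime, the interim workaround we are considering is to mark the step itself with continue-on-error (a standard GitHub Actions step setting, not specific to Vale), so that a non-zero exit from reviewdog cannot block the PR. A rough sketch based on the config above:

      - name: Vale Linter
        uses: errata-ai/vale-action@reviewdog
        # continue-on-error lets the job succeed even if this step fails,
        # so a reviewdog error cannot block the PR while we pilot Vale.
        continue-on-error: true
        with:
          fail_on_error: false
          vale_flags: "--no-exit --config=docs/.vale/vale.ini"
          filter_mode: diff_context
          files: docs/src/main/asciidoc/
        env:
          GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}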

ElPaisano commented Dec 1, 2022

Hi all, we've recently been setting up the Vale GitHub Action in our docs as well, and we ran into the same issue described here.

I may have found a temporary workaround:

Theory:
This error

Error: reviewdog: Too many results (annotations) in diff.
  You may miss some annotations due to GitHub limitation for annotation created by logging command.
  Please check GitHub Actions log console to see all results.

doesn't have anything to do with Vale or the Vale config itself. Rather, in each of these cases Vale emits more annotations than the limit GitHub allows, which causes the action to fail regardless of other settings such as fail_on_error: false and vale_flags: "--no-exit".

Tests:
After looking over this thread, I made a few changes to our docs' Vale GitHub Action config, partially inspired by @jeromecambon (your file-diff step was helpful, thank you!!!! 😄). The short summary of what I did (a workflow sketch follows the list):

  • Get the file diff first using jitterbit/get-changed-files@v1, and only have Vale run on the diff, as opposed to the entire file and the entire doc set. This is one way to limit the number of annotations.
  • In the action's .yml file, removed the --no-exit flag and reset fail_on_error to true because, in our case, we do want the action to fail if an ERROR is found.
  • Lastly, told the Vale GitHub Action to report only issues of severity ERROR and to disregard WARNING and SUGGESTION (vale_flags: "--minAlertLevel=error"). In the contributor workflow I'm proposing, the contributor should already have run Vale locally with a full config (i.e. displaying ERROR, WARNING, and SUGGESTION issues) and fixed those issues before committing and pushing. So the GitHub Action is just a gatekeeper at this point, checking only for the worst issues. This is another way to limit the number of annotations.
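
Putting that together, the relevant part of our workflow now looks something like the sketch below (the changed-files output name and the exact format expected by the files input are assumptions on my part, so double-check them against the respective READMEs):

jobs:
  vale:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      # Collect the files touched by the PR so Vale only lints the diff,
      # which keeps the annotation count down.
      - name: Get changed files
        id: files
        uses: jitterbit/get-changed-files@v1
      - name: Vale Linter
        uses: errata-ai/vale-action@reviewdog
        with:
          # Assumption: pass the space-delimited list of added/modified files;
          # check the vale-action README for the format 'files' expects.
          files: ${{ steps.files.outputs.added_modified }}
          # We do want the action to fail if an ERROR is found...
          fail_on_error: true
          # ...but only ERROR-level issues are reported, not WARNING/SUGGESTION.
          vale_flags: "--minAlertLevel=error"
        env:
          GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}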

Result:
Ran a few tests (spelling errors, grammar errors) and the action failed as expected. Removed the errors, and the action continued to work properly, and did not fail with the error described in this thread.

Conclusion:
In summary, the workaround was to limit the number of annotations so that they stay below the GitHub Actions limit. So, depending on your workflow (run Vale locally first with a full config, then use the GitHub Action as a gatekeeper for the worst offenders), this might work as a solution.

Further investigations:

I'd be curious to see if this workaround works for other folks, and also to hear @jdkato's take on this.

P.S. I absolutely love Vale; it's such a cool tool.

jeromecambon (Author) commented:
Thank you @ElPaisano for your update 😃
Not sure it would help me, since I hit the issue on a single big file with many annotations.
With jitterbit/get-changed-files@v1 you only get a list of changed files, not the diff introduced by the PR. So I'm not sure I understand what you mean by "only have vale run on the diff, as opposed to the entire file"?
Vale is running on the entire file anyway, isn't it?

jdkato (Member) commented Dec 8, 2022

@ElPaisano, thanks for looking into this.

I think you're right about this one: the Too many results error is a case in which the action probably should fail, because that error shouldn't happen in the first place (unlike ignoring an error-level alert from Vale, which is a matter of preference).

The question then becomes: how do we avoid this, even for repos that have a lot of alerts to show? I don't have much time to look into this at the moment, but there are a few ideas I'd like to pursue.

jdkato added the bug label on Dec 8, 2022
jeromecambon (Author) commented Jan 6, 2023

@jdkato Any news on this one? Thanks!

jdkato (Member) commented Jan 16, 2023

A new release should be out sometime next week.

jdkato (Member) commented Jan 28, 2023

Sorry for the delay here -- I haven't had time to finish the release yet. The good news is that a combination of changes (filtering order, moving away from Docker, etc.) has resulted in roughly a 50% performance improvement.

rmoff (Contributor) commented Jun 3, 2024

@jdkato is this something you're still planning to release a fix for? Thanks!
