I have set up an azure-pipelines YAML config with a pipeline job that runs 5 scripts. Each script step has the `failOnStderr` flag set to `true`. The scripts run successfully, yet the stage fails with this output:
```
##[error]Bash wrote one or more lines to the standard error stream.
```
I enabled `system.debug` for verbose logging and got these further details:
```
##[debug]Exit code 0 received from tool '/bin/bash'
##[debug]STDIO streams have closed for tool '/bin/bash'
##[error]Bash wrote one or more lines to the standard error stream.
##[debug]Processed: ##vso[task.issue type=error;]Bash wrote one or more lines to the standard error stream.
##[debug]task result: Failed
##[debug]Processed: ##vso[task.complete result=Failed;done=true;]
```
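For reference, a minimal sketch of what each step in my config looks like (script names are illustrative, not my actual paths):

```yaml
# One of the five script steps; failOnStderr is set on each of them.
steps:
- bash: ./scripts/build.sh
  displayName: 'Run build script'
  failOnStderr: true
```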
I have the following questions:

- Why is an exit code of 0 being interpreted as an error?
- Is something else being sent to standard error behind the scenes for whatever reason?
- What solution/workaround does the community recommend outside of the config options offered by Azure Pipelines? I could use a shell `trap`, but I was hoping to find something that would reduce boilerplate across the scripts.
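To illustrate, the per-script boilerplate I was hoping to avoid would look roughly like this (a sketch, not my actual scripts; the `ERR` trap plus an `exec` redirect so that only a non-zero exit code, not incidental stderr output, fails the step):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Make real failures loud and explicit via the exit code.
trap 'echo "script failed at line $LINENO"; exit 1' ERR

# Route all stderr to stdout for the rest of the script, so
# failOnStderr does not trip on harmless diagnostic output.
exec 2>&1

echo "diagnostic message" >&2   # now lands on stdout
```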