Discussion:
FileNotFoundError raised due to a file named in the config file, rather than the config file itself
Loris Bennett
2024-11-11 14:05:56 UTC
Permalink
Hi,

I have the following in my program:

try:
    logging.config.fileConfig(args.config_file)
    config = configparser.ConfigParser()
    config.read(args.config_file)
    if args.verbose:
        print(f"Configuration file: {args.config_file}")
except FileNotFoundError:
    print(f"Error: configuration file {args.config_file} not found. Exiting.")
    sys.exit(0)

and when I ran the program I got the error

Error: configuration file /usr/local/etc/sc_mailer not found. Exiting.

However, this file *does* exist and *can* be read. By checking the
'filename' attribute of the exception I discovered that the problem was
the log file defined *in* the config file, namely

[handler_fileHandler]
class=FileHandler
level=DEBUG
formatter=defaultFormatter
args=('/var/log/my_prog.log', 'a')

This log file did not exist. The exception is thrown by

logging.config.fileConfig(args.config_file)

My questions are:

1. Should I be surprised by this behaviour?
2. In terms of generating a helpful error message, how should one
distinguish between the config file not existing and the log file not
existing?

Cheers,

Loris
--
This signature is currently under construction.
Left Right
2024-11-11 16:04:52 UTC
Permalink
Poor error reporting is a very common problem in programming. Python
is nothing special in this respect. Of course, it would've been
better if the error reported which file wasn't found. But usually
these problems stack, as in your code. Unfortunately, it's your duty,
as the language user, to anticipate those problems and act
accordingly. Now you've learned that the one file you believed could
be the source of the error isn't the only one--well, adjust
your code to differentiate between those two (and potentially other?)
cases. There's very little else you can do besides that.
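
For illustration, one way to tell the two cases apart (a sketch; the
explicit existence check and the error texts are illustrative, and
'args' is the argparse result from the original post):

import configparser
import logging.config
import os
import sys

if not os.path.exists(args.config_file):
    print(f"Error: configuration file {args.config_file} not found. Exiting.")
    sys.exit(1)

try:
    logging.config.fileConfig(args.config_file)
except FileNotFoundError as e:
    # The missing path is now something referenced *by* the config file,
    # typically the log file from the [handler_fileHandler] section.
    print(f"Error: file {e.filename} referenced by {args.config_file} not found. Exiting.")
    sys.exit(1)

config = configparser.ConfigParser()
config.read(args.config_file)

With that, the message in the original post would have named
/var/log/my_prog.log rather than the config file.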

NB. At the system level, the error carries no information about which
file wasn't found. The failed system call simply returns a numeric
error code (the famous ENOENT) when a file cannot be opened. Python
could've been more helpful by figuring out which path caused the
problem and printing that in the error message, but it doesn't...
That's why I, myself, never use the vanilla FileNotFoundError; I
always re-raise it as a customized version that incorporates the
information about the missing file in the error message.
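
A sketch of that re-raise pattern (the exception class and helper
function are made up for illustration):

class MissingFileError(FileNotFoundError):
    """Illustrative subclass whose message always names the missing path."""

def must_open(path, mode="r"):
    try:
        return open(path, mode)
    except FileNotFoundError as e:
        # Re-raise with the offending path spelled out in the message.
        raise MissingFileError(f"file not found: {e.filename}") from e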

NB2. It's always a bad idea to print logs to files. Any sysadmin /
ops / infra person worth their salt will tell you that. The only
place logs should go is standard error. There are tried and true
tools that can pick up logs from that point on and do with them
whatever your heart desires. That is, of course, unless you are
creating system tools for universal log management (in which case, I'd
question the choice of Python as a suitable language for such a task).
Unfortunately, even though this has been common knowledge for decades,
it's still elusive in the world of application development :|

Loris Bennett
2024-11-12 09:15:47 UTC
Permalink
Post by Left Right
Poor error reporting is a very common problem in programming. Python
is nothing special in this respect. Of course, it would've been
better if the error reported which file wasn't found. But usually
these problems stack, as in your code. Unfortunately, it's your duty,
as the language user, to anticipate those problems and act
accordingly. Now you've learned that the one file you believed could
be the source of the error isn't the only one--well, adjust
your code to differentiate between those two (and potentially other?)
cases. There's very little else you can do besides that.
NB. At the system level, the error carries no information about which
file wasn't found. The failed system call simply returns a numeric
error code (the famous ENOENT) when a file cannot be opened. Python
could've been more helpful by figuring out which path caused the
problem and printing that in the error message, but it doesn't...
That's why I, myself, never use the vanilla FileNotFoundError; I
always re-raise it as a customized version that incorporates the
information about the missing file in the error message.
That sounds like a good idea.
Post by Left Right
NB2. It's always a bad idea to print logs to files. Any sysadmin /
ops / infra person worth their salt will tell you that. The only
place logs should go is standard error. There are tried and true
tools that can pick up logs from that point on and do with them
whatever your heart desires. That is, of course, unless you are
creating system tools for universal log management (in which case, I'd
question the choice of Python as a suitable language for such a task).
Unfortunately, even though this has been common knowledge for decades,
it's still elusive in the world of application development :|
I am not entirely convinced by NB2. I am, in fact, a sort of sysadmin
person and most of my programs write to a log file. The programs are
also moderately complex, so a single program might access a database,
query an LDAP server, send email etc., so potentially quite a lot can go
wrong. They are also not programs whose output I would pipe to another
command. What would be the advantage of logging to stderr? Quite apart
from that, I find having a log file useful for debugging when I am
developing.

Cheers,

Loris
--
Dr. Loris Bennett (Herr/Mr)
FUB-IT, Freie Universität Berlin
Left Right
2024-11-12 19:10:55 UTC
Permalink
Post by Loris Bennett
I am not entirely convinced by NB2. I am, in fact, a sort of sysadmin
person and most of my programs write to a log file. The programs are
also moderately complex, so a single program might access a database,
query an LDAP server, send email etc., so potentially quite a lot can go
wrong. They are also not programs whose output I would pipe to another
command. What would be the advantage of logging to stderr? Quite apart
from that, I find having a log file useful for debugging when I am
developing.
First, the problem with writing to files is that there is no way to
make these logs reliable. This is what I mean by saying these are
unreliable: since logs are designed to grow indefinitely, the natural
response to this design property is log rotation. But, it's
impossible to reliably rotate a log file. There's always a chance
that during the rotation some log entries will be written to the file
past the point of rotation, but prior to the point where the next logs
volume starts.

There are similar reliability problems with writing to Unix or
Internet sockets, databases etc. For different reasons, but at the
end of the day, whoever wants logs, they want them to be reliable.
Both simplicity and convention selected for stderr as the only and the
best source of logging output.

Programs that write their output to log files will always irritate
their users because users will have to do some detective work to
figure out where those files are, and in some cases they will have to
do administrative works to make sure that the location where the
program wants to store the log files is accessible, has enough free
space, is speedy enough etc. So, from the ops perspective, whenever I
come across a program that tries to write logs to anything other than
stderr, I make an earnest effort to throw that program into the gutter
and never touch it again. It's too much headache to babysit every
such program, to remember the location of the log files of every such
program, the required permissions, to provision storage. If you are
in that line of work, you just want all logs to go to the same place
(journal), where you can later filter / aggregate / correlate and
perform other BI tasks as your heart desires.

Of course, if you only administer your own computer, and you have low
single digits programs to run, and their behavior doesn't change
frequently, and you don't care to drop some records every now and
then... it's OK to log to files directly from a program. But then you
aren't really in the sysadmin / infra / ops category, as you are more
of a hobby enthusiast.

Finally, if you want your logs to go to a file, and currently, your
only option is stderr, your shell gives you a really, really simple
way of redirecting stderr to a file. So, really, there aren't any
excuses to do that.
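
The Python side of that is trivial (a sketch; format and level are
arbitrary):

import logging
import sys

# Everything goes to stderr; redirection, a pipe or journald takes it from there.
logging.basicConfig(stream=sys.stderr,
                    level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(name)s: %(message)s")
logging.getLogger(__name__).info("this record ends up on stderr")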
Greg Ewing
2024-11-13 01:04:33 UTC
Permalink
Post by Left Right
since logs are designed to grow indefinitely, the natural
response to this design property is log rotation.
I don't see how writing logs to stderr solves that problem in any way.
Whatever stderr is sent to still has a potentially unlimited amount
of data to deal with.
Post by Left Right
But, it's
impossible to reliably rotate a log file. There's always a chance
that during the rotation some log entries will be written to the file
past the point of rotation, but prior to the point where the next logs
volume starts.
Not sure I follow you there. You seem to be thinking of a particular
way of rotating log files, where an external process tries to swap
the program's log file out from under it without its knowledge. That
could be vulnerable to race conditions. But if the program doing the
logging handles the rotation itself, there's no reason it has to
lose data.
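
For instance, the stdlib's rotating handler performs the rollover
inside the emit call itself (a sketch; the file name and limits are
illustrative):

import logging
from logging.handlers import RotatingFileHandler

# The handler rotates the file itself, so no record can fall into a gap
# between the old file and its successor.
handler = RotatingFileHandler("my_prog.log", maxBytes=1_000_000, backupCount=5)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

log = logging.getLogger("my_prog")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("rotation handled in-process")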
--
Greg
Lawrence D'Oliveiro
2024-11-13 02:13:11 UTC
Permalink
You seem to be thinking of a particular way
of rotating log files, where an external process tries to swap the
program's log file out from under it without its knowledge. That could
be vulnerable to race conditions.
But if you just use standard system facilities, like syslog or the systemd
journal, that is automatically handled for you.
Mats Wichmann
2024-11-12 20:28:04 UTC
Permalink
Post by Left Right
Finally, if you want your logs to go to a file, and currently, your
only option is stderr, your shell gives you a really, really simple
way of redirecting stderr to a file. So, really, there aren't any
excuses to do that.
an awful lot of the programs that need to do extensive logging don't run
under control of a shell, and thus "shell redirection" helps not at all.
Chris Angelico
2024-11-12 20:34:10 UTC
Permalink
On Wed, 13 Nov 2024 at 07:29, Mats Wichmann via Python-list
Post by Mats Wichmann
Post by Left Right
Finally, if you want your logs to go to a file, and currently, your
only option is stderr, your shell gives you a really, really simple
way of redirecting stderr to a file. So, really, there aren't any
excuses to do that.
an awful lot of the programs that need to do extensive logging don't run
under control of a shell, and thus "shell redirection" helps not at all.
Redirection's still a thing. "Shell redirection" is just the shell
syntax to request redirection.

ChrisA
Loris Bennett
2024-11-13 07:11:01 UTC
Permalink
Post by Left Right
Post by Loris Bennett
I am not entirely convinced by NB2. I am, in fact, a sort of sysadmin
person and most of my programs write to a log file. The programs are
also moderately complex, so a single program might access a database,
query an LDAP server, send email etc., so potentially quite a lot can go
wrong. They are also not programs whose output I would pipe to another
command. What would be the advantage of logging to stderr? Quite apart
from that, I find having a log file useful for debugging when I am
developing.
First, the problem with writing to files is that there is no way to
make these logs reliable. This is what I mean by saying these are
unreliable: since logs are designed to grow indefinitely, the natural
response to this design property is log rotation. But, it's
impossible to reliably rotate a log file. There's always a chance
that during the rotation some log entries will be written to the file
past the point of rotation, but prior to the point where the next logs
volume starts.
There are similar reliability problems with writing to Unix or
Internet sockets, databases etc. For different reasons, but at the
end of the day, whoever wants logs, they want them to be reliable.
Both simplicity and convention selected for stderr as the only and the
best source of logging output.
If I understand correctly you are not saying that logrotate is less
reliable than the other methods mentioned above. But in that case,
reliability seems no more of a reason not to log to files than it is a
reason not to write to a socket or to a database.
Post by Left Right
Programs that write their output to log files will always irritate
their users because users will have to do some detective work to
figure out where those files are, and in some cases they will have to
do administrative works to make sure that the location where the
program wants to store the log files is accessible, has enough free
space, is speedy enough etc.
All your points regarding the work involved are valid, but many
programs, such as MariaDB, OpenLDAP or SSSD, do write to a log file (and
it is usually under /var/log or /var/log/something). So it seems like a
common approach.

Besides, I define the location of the logfile in the config file for the
program (the problem in my original question arose from this filename in
the config file not existing). So finding the location is not an issue.
You have to find the config file, of course, but I think /etc or
/usr/local/etc are fairly standard and my programs generally have an
option '--config-file' anyway.
Post by Left Right
So, from the ops perspective, whenever I
come across a program that tries to write logs to anything other than
stderr, I make an earnest effort to throw that program into the gutter
and never touch it again. It's too much headache to babysit every
such program, to remember the location of the log files of every such
program, the required permissions, to provision storage. If you are
in that line of work, you just want all logs to go to the same place
(journal), where you can later filter / aggregate / correlate and
perform other BI tasks as your heart desires.
That may be true in many cases, but those I am dealing with don't
require much filtering beyond 'grep' and also don't require aggregation
or correlation.
Post by Left Right
Of course, if you only administer your own computer, and you have low
single digits programs to run, and their behavior doesn't change
frequently, and you don't care to drop some records every now and
then... it's OK to log to files directly from a program. But then you
aren't really in the sysadmin / infra / ops category, as you are more
of a hobby enthusiast.
What I do is indeed a bit of a niche, but I do get paid for this, so I
would not consider myself a 'hobby enthusiast'.
Post by Left Right
Finally, if you want your logs to go to a file, and currently, your
only option is stderr, your shell gives you a really, really simple
way of redirecting stderr to a file. So, really, there aren't any
excuses to do that.
I don't quite understand what your suggestion is. Do you mean that I
should log to stderr and then run my program as

my_program ... 2>&1 | logger

?

Cheers,

Loris
--
This signature is currently under construction.
Barry
2024-11-14 16:01:42 UTC
Permalink
Post by Loris Bennett
I don't quite understand what your suggestion is. Do you mean that I
should log to stderr and then run my program as
my_program ... 2>&1 | logger
On almost all Linux distros you would run a long running program as a systemd service and let it put logs into the journal. I wonder if that was what was being hinted at?

Barry
Roel Schroeven
2024-11-13 09:12:07 UTC
Permalink
Post by Left Right
Post by Loris Bennett
I am not entirely convinced by NB2. I am, in fact, a sort of sysadmin
person and most of my programs write to a log file. The programs are
also moderately complex, so a single program might access a database,
query an LDAP server, send email etc., so potentially quite a lot can go
wrong. They are also not programs whose output I would pipe to another
command. What would be the advantage of logging to stderr? Quite apart
from that, I find having a log file useful for debugging when I am
developing.
First, the problem with writing to files is that there is no way to
make these logs reliable. This is what I mean by saying these are
unreliable: since logs are designed to grow indefinitely, the natural
response to this design property is log rotation. But, it's
impossible to reliably rotate a log file. There's always a chance
that during the rotation some log entries will be written to the file
past the point of rotation, but prior to the point where the next logs
volume starts.
What I most often do is use one logfile per day, with the date in the
filename. Then simply delete all files older than 7 days, or 30 days, or
whatever is useful for the task at hand. Not only does that sidestep any
issues with rotating logs, but I also find it's very useful to have the
date in the filename.
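
A sketch of that scheme (directory, file-name pattern and retention
period are made up for illustration):

import datetime
import logging
import pathlib

log_dir = pathlib.Path("logs")
log_dir.mkdir(exist_ok=True)

# One log file per day, with the date in the name.
today = datetime.date.today().isoformat()
logging.basicConfig(filename=log_dir / f"myapp-{today}.log",
                    level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

# Retention: delete files older than 30 days.
cutoff = datetime.date.today() - datetime.timedelta(days=30)
for path in log_dir.glob("myapp-????-??-??.log"):
    stamp = path.stem.split("-", 1)[1]            # e.g. "2024-11-13"
    if datetime.date.fromisoformat(stamp) < cutoff:
        path.unlink()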
Post by Left Right
Of course, if you only administer your own computer, and you have low
single digits programs to run, and their behavior doesn't change
frequently, and you don't care to drop some records every now and
then... it's OK to log to files directly from a program. But then you
aren't really in the sysadmin / infra / ops category, as you are more
of a hobby enthusiast.
I would not use my scheme for something released to a wider audience.
For in-house software though, I like that I can easily put each
application's logs next to its other data files, and that I don't have
to figure out how to get the system's own log infrastructure to work as
I want it to.
Post by Left Right
Finally, if you want your logs to go to a file, and currently, your
only option is stderr, your shell gives you a really, really simple
way of redirecting stderr to a file.
I feel this is the worst of both worlds. Now your program doesn't have
any control over filename or log expiration, and neither does your
system's logging infrastructure. You just get one indefinitely growing
log file.
--
"You can fool some of the people all the time, and all of the people some
of the time, but you cannot fool all of the people all of the time."
-- Abraham Lincoln
"You can fool too many of the people too much of the time."
-- James Thurber
Michael Torrie
2024-11-14 04:07:56 UTC
Permalink
Post by Left Right
But, it's
impossible to reliably rotate a log file. There's always a chance
that during the rotation some log entries will be written to the file
past the point of rotation, but prior to the point where the next logs
volume starts.
On any Unix system this is untrue. Rotating a log file is quite simple:
simply rename the log file, then send a signal to the process to close
the log file handle and open a new one. After that perhaps compress the
rotated log file. Nothing is lost. This is standard practice in Unix.
It is reliable.
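
In Python the cooperating side of that comes for free (a sketch;
WatchedFileHandler re-opens the path when an external rotator has
renamed the file, so the application needs no signal handling):

import logging
from logging.handlers import WatchedFileHandler

# Before each record the handler checks the file's device/inode and
# re-opens the path if the original file has been renamed away.
handler = WatchedFileHandler("my_prog.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

log = logging.getLogger("my_prog")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("keeps working across a rename-based rotation")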

Perhaps the scenario you posit would happen on Windows.
Left Right
2024-11-14 07:03:32 UTC
Permalink
I realized I posted this without cc'ing the list:
http://jdebp.info/FGA/do-not-use-logrotate.html .

The link above gives a more detailed description of why log rotation
on the Unix system is not only not simple, but is, in fact,
unreliable.

NB. Also, it really rubs me the wrong way when the word "standard" is
used to mean "common" (instead of "as described in a standard
document"). And when it comes to popular tools, oftentimes "common"
is wrong because commonly the tool is used by amateurs rather than
experts. In other words, you only reinforced what I wrote initially:
plenty of application developers don't know how to do logging well.
It also appears that they would lecture infra / ops people on how to
do something that they aren't experts on, while the latter are :)
Chris Angelico
2024-11-14 08:13:54 UTC
Permalink
On Thu, 14 Nov 2024 at 18:05, Left Right via Python-list
Post by Left Right
http://jdebp.info/FGA/do-not-use-logrotate.html .
The link above gives a more detailed description of why log rotation
on the Unix system is not only not simple, but is, in fact,
unreliable.
You're assuming a very specific tool here. Log rotation isn't
necessarily performed by that one tool. There are many ways to do it.

Log to stderr. That puts the power in the hands of the sysadmin,
rather than forcing trickery like setting the log file name to be
/proc/self/fd/2 to get around it.

ChrisA
D'Arcy Cain
2024-11-13 13:37:47 UTC
Permalink
Post by Roel Schroeven
What I most often do is use one logfile per day, with the date in the
filename. Then simply delete all files older than 7 days, or 30 days, or
whatever is useful for the task at hand. Not only does that sidestep any
issues with rotating logs, but I also find it's very useful to have the
date in the filename.
I do something similar for my system logs except that I let the system
use the usual names and, at midnight, I rename the file appending the
previous day's date to it and restart services.
--
D'Arcy J.M. Cain
Vybe Networks Inc.
http://www.VybeNetworks.com/
IM:***@VybeNetworks.com VoIP: sip:***@VybeNetworks.com
Ethan Furman
2024-11-14 17:32:32 UTC
Permalink
Post by Left Right
http://jdebp.info/FGA/do-not-use-logrotate.html .
The link above gives a more detailed description of why log rotation
on the Unix system is not only not simple, but is, in fact,
unreliable.
Having read the linked article, I see it is not relevant to Python, as Python's logging tool is
the writer/rotator program, thus no window for lost entries exists.
Post by Left Right
NB. Also, it really rubs me the wrong way when the word "standard" is
used to mean "common" (instead of "as described in a standard
document").
Yes, that is irritating.
Post by Left Right
And when it comes to popular tools, oftentimes "common"
is wrong because commonly the tool is used by amateurs rather than
experts. In other words, you only reinforced what I wrote initially:
plenty of application developers don't know how to do logging well.
It also appears that they would lecture infra / ops people on how to
do something that they aren't experts on, while the latter are :)
Well, since this is a Python list, perhaps you could make sure your advice is also Python appropriate. I appreciate
diversions into general areas and learning new things, but your general claims were untrue when it comes to Python
specifically, and that was unclear until I read your linked post.

--
~Ethan~
Michael Torrie
2024-11-14 15:44:56 UTC
Permalink
Post by Left Right
http://jdebp.info/FGA/do-not-use-logrotate.html .
The link above gives a more detailed description of why log rotation
on the Unix system is not only not simple, but is, in fact,
unreliable.
Nothing in that article contradicts what I said about renaming log
files. His argument is that renaming log files messes with tail -F, and
is therefore broken and unreliable. Which is a pretty strange argument. tail
-F might not see some data during the rotation, but the log files
themselves don't miss anything, which was my contention. In all my
years of sysadmin-ing I have never once worried about problems GNU tail
might have with a file that gets rotated out from under you. Not sure
why the author is so fixated on it.

There are actual legitimate issues at play, such as a mechanism for
informing the process to close the file (rotate usually requires
processes to respond to SIGHUP). And of course the disk can fill up
causing a denial of service of one kind or another. The latter is the
biggest source of problems.

Of course you could just log using the standard libc syslog facility.
Or better yet, start your process from a systemd unit file and let the
journal automatically log stderr. In both cases that would satisfy the
technical objections of the author of that little treatise.
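
In Python either route is a one-handler change (a sketch; /dev/log is
the usual Linux syslog socket):

import logging
from logging.handlers import SysLogHandler

# Hand records to the local syslog daemon (journald listens there on
# systemd systems) instead of writing a file.
handler = SysLogHandler(address="/dev/log")
handler.setFormatter(logging.Formatter("my_prog: %(levelname)s %(message)s"))

log = logging.getLogger("my_prog")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("record delivered to syslog")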
Jon Ribbens
2024-11-14 18:12:28 UTC
Permalink
Post by Michael Torrie
Post by Left Right
http://jdebp.info/FGA/do-not-use-logrotate.html .
The link above gives a more detailed description of why log rotation
on the Unix system is not only not simple, but is, in fact,
unreliable.
Nothing in that article contradicts what I said about renaming log
files. His argument is that renaming log files messes with tail -F, and
is therefore broken and unreliable. Which is a pretty strange argument. tail
-F might not see some data during the rotation, but the log files
themselves don't miss anything, which was my contention. In all my
years of sysadmin-ing I have never once worried about problems GNU tail
might have with a file that gets rotated out from under you. Not sure
why the author is so fixated on it.
I really wouldn't worry about anything Jonathan de Boyne Pollard says.
d***@online.de
2024-11-11 17:24:44 UTC
Permalink
Post by Loris Bennett
logging.config.fileConfig(args.config_file)
config = configparser.ConfigParser()
config.read(args.config_file)
print(f"Configuration file: {args.config_file}")
print(f"Error: configuration file {args.config_file} not found. Exiting.")
Do not replace full error information (including a traceback)
with your own reduced error message.
If you omit your `try ... except FileNotFoundError`
(or start the `except` clause with a `raise`), you
will learn where in the code the exception has been raised
and likely as well what was not found (Python is quite good
with such error details).
Post by Loris Bennett
...
1. Should I be surprised by this behaviour?
Your code contains a major weakness (see above); thus surprises
are not unlikely.
Post by Loris Bennett
2. In terms of generating a helpful error message, how should one
distinguish between the config file not existing and the log file not
existing?
You look at the error information provided by Python
(and its library) rather than hiding it.
Lawrence D'Oliveiro
2024-11-11 21:05:27 UTC
Permalink
Post by Loris Bennett
print(f"Error: configuration file {args.config_file} not found.
Exiting.")
sys.exit(0)
and when I ran the program I got the error
Error: configuration file /usr/local/etc/sc_mailer not found.
Exiting.
However, this file *does* exist and *can* be read.
This is your own fault for intercepting the exception and printing out
your own misleading error message. If you had left the exception uncaught,
it would have printed out the right file name.
Cameron Simpson
2024-11-11 21:17:46 UTC
Permalink
Post by d***@online.de
Post by Loris Bennett
logging.config.fileConfig(args.config_file)
config = configparser.ConfigParser()
config.read(args.config_file)
print(f"Configuration file: {args.config_file}")
print(f"Error: configuration file {args.config_file} not found. Exiting.")
Do not replace full error information (including a traceback)
with your own reduced error message.
If you omit your `try ... except FileNotFoundError`
(or start the `except` clause with a `raise`), you
will learn where in the code the exception has been raised
and likely as well what was not found (Python is quite good
with such error details).
Actually, file-not-found is pretty well defined - the except action
itself is fine in that regard.

[...]
Post by d***@online.de
Post by Loris Bennett
2. In terms of generating a helpful error message, how should one
distinguish between the config file not existing and the log file not
existing?
Generally you should put a try/except around the smallest possible piece
of code. So:

config = configparser.ConfigParser()
try:
    config.read(args.config_file)
except FileNotFoundError as e:
    print(f"Error: configuration file {args.config_file} not found: {e}")

This way you know that the config file was missing.

Cheers,
Cameron Simpson <***@cskk.id.au>
Loris Bennett
2024-11-12 08:52:31 UTC
Permalink
Post by Cameron Simpson
Post by d***@online.de
Post by Loris Bennett
logging.config.fileConfig(args.config_file)
config = configparser.ConfigParser()
config.read(args.config_file)
print(f"Configuration file: {args.config_file}")
print(f"Error: configuration file {args.config_file} not found. Exiting.")
Do not replace full error information (including a traceback)
with your own reduced error message.
If you omit your `try ... except FileNotFoundError`
(or start the `except` clause with a `raise`), you
will learn where in the code the exception has been raised
and likely as well what was not found (Python is quite good
with such error details).
Actually, file-not-found is pretty well defined - the except action
itself is fine in that regard.
[...]
Post by d***@online.de
Post by Loris Bennett
2. In terms of generating a helpful error message, how should one
distinguish between the config file not existing and the log file not
existing?
Generally you should put a try/except around the smallest possible
piece of code. So:

config = configparser.ConfigParser()
try:
    config.read(args.config_file)
except FileNotFoundError as e:
    print(f"Error: configuration file {args.config_file} not found: {e}")
This way you know that the config file was missing.
I appreciate the point you make about the smallest possible piece of
code, although I can imagine that this could potentially create a lot of
try/except clutter and I might just want a single block and then try to
catch various exceptions.

Regarding your example above, if 'missingfile.py' contains the following

import configparser

config = configparser.ConfigParser()

try:
    config.read('/foo/bar')
except FileNotFoundError as e:
    print(f"Error: configuration file {config_file} not found: {e}")

then

python3 missingfile.py

does not produce any output for me and so does not seem to be a
reliable way of handling the case where the config file does not exist.

Cheers,

Loris
--
This signature is currently under construction.
Karsten Hilbert
2024-11-12 17:47:37 UTC
Permalink
Post by Loris Bennett
Regarding your example above, if 'missingfile.py' contains the following
import configparser
config = configparser.ConfigParser()
config.read('/foo/bar')
print(f"Error: configuration file {config_file} not found: {e}")
them
python3 missingfile.py
does not produce any output for me and so does not seem to be a
reliable way of handling the case where the config file does not exist.
help(config.read)
Help on method read in module configparser:

read(filenames, encoding=None) method of configparser.ConfigParser instance
Read and parse a filename or an iterable of filenames.

Files that cannot be opened are silently ignored; this is
designed so that you can specify an iterable of potential
configuration file locations (e.g. current directory, user's
home directory, systemwide directory), and all existing
configuration files in the iterable will be read. A single
filename may also be given.

Return list of successfully read files.

So, the very fact that it raises no exception AND
returns an empty list is the (intended) way of knowing the
error state.
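
A sketch of checking that return value, reusing the path from the
example above:

import configparser

config = configparser.ConfigParser()
read_ok = config.read('/foo/bar')

if not read_ok:   # empty list: nothing was read
    print("Error: configuration file /foo/bar not found or not readable.")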

Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Rob Cliffe
2024-11-12 21:04:02 UTC
Permalink
Post by Cameron Simpson
Generally you should put a try/except around the smallest possible
piece of code.
That is excellent advice.
Best wishes
Rob Cliffe
Post by Cameron Simpson
config = configparser.ConfigParser()
try:
    config.read(args.config_file)
except FileNotFoundError as e:
    print(f"Error: configuration file {args.config_file} not found: {e}")
dn
2024-11-12 01:10:01 UTC
Permalink
Post by Loris Bennett
       logging.config.fileConfig(args.config_file)
       config = configparser.ConfigParser()
       config.read(args.config_file)
           print(f"Configuration file: {args.config_file}")
       print(f"Error: configuration file {args.config_file} not
found.  Exiting.")
1. Should I be surprised by this behaviour?
No. Python has behaved as-programmed.
Post by Loris Bennett
2. In terms of generating a helpful error message, how should one
  distinguish between the config file not existing and the log file not
  existing?
Generally you should put a try/except around the smallest possible piece
of code. So:

config = configparser.ConfigParser()
try:
    config.read(args.config_file)
except FileNotFoundError as e:
    print(f"Error: configuration file {args.config_file} not found: {e}")

This way you know that the config file was missing.
Augmenting @Cameron's excellent advice: please research "Separation of
Concerns", eg https://en.wikipedia.org/wiki/Separation_of_concerns
(which overlaps with one interpretation of SOLID's "Single
Responsibility Principle" and the *nix philosophy of "do one thing, and
do it well").

If you were to explain the code-snippet in English (or ...) it has
several parts/concerns:

- configure the log
- instantiate ConfigParser()
- read the env.file
- advise verbosity-setting
- handle file-error
- (and, but not appearing) terminate execution

A block of code (see indentation for rough definition of "block") should
achieve one thing ("concern"). Thus, the advice to separate-out the
file-read and attendant defensive-coding.

This anticipates the problem (2) of distinguishing the subject of any
one error/stack-trace from any others - and, arguably, makes the code
easier to read.
--
Regards,
=dn
Chris Angelico
2024-11-12 01:17:27 UTC
Permalink
On Tue, 12 Nov 2024 at 01:59, Loris Bennett via Python-list
Post by Loris Bennett
2. In terms of generating a helpful error message, how should one
distinguish between the config file not existing and the log file not
existing?
By looking at the exception's attributes rather than assuming and
hard-coding the path in your message? Or, even better, just let the
exception bubble.

ChrisA
Loris Bennett
2024-11-12 09:00:36 UTC
Permalink
Post by Chris Angelico
On Tue, 12 Nov 2024 at 01:59, Loris Bennett via Python-list
Post by Loris Bennett
2. In terms of generating a helpful error message, how should one
distinguish between the config file not existing and the log file not
existing?
By looking at the exception's attributes rather than assuming and
hard-coding the path in your message? Or, even better, just let the
exception bubble.
As Dieter also pointed out, I obviously made a mistake in assuming that
I knew what 'FileNotFound' was referring to.

However, it strikes me as not immediately obvious that the logging file
must exist at this point. I can imagine a situation in which I want to
configure a default log file and create it if it is missing.

Cheers,

Loris
--
This signature is currently under construction.
d***@online.de
2024-11-13 18:36:04 UTC
Permalink
Post by Loris Bennett
...
However, it strikes me as not immediately obvious that the logging file
must exist at this point. I can imagine a situation in which I want to
configure a default log file and create it if it is missing.
This is what happens usually:
if you open a file with mode `a` or `w`, the file is created
if it does not yet exist.

Thus, a missing log file should not give you the `FileNotFoundError`
exception.
Look at the exception details: they should tell you what really
was not found (maybe the directory for the logfile).
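
For example (a sketch, using the paths from the original post):

import logging.config

try:
    logging.config.fileConfig("/usr/local/etc/sc_mailer")
except FileNotFoundError as e:
    # e.filename is the path that could not be opened: here the log file
    # named in the [handler_fileHandler] section (possibly because its
    # directory is missing), not the config file itself.
    print(f"Not found: {e.filename} ({e.strerror})")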
Kushal Kumaran
2024-11-13 22:40:57 UTC
Permalink
Post by d***@online.de
Post by Loris Bennett
...
However, it strikes me as not immediately obvious that the logging file
must exist at this point. I can imagine a situation in which I want to
configure a default log file and create it if it is missing.
if you open a file with mode `a` or `w`, the file is created
if it does not yet exist.
Thus, a missing log file should not give you the `FileNotFoundError`
exception.
Look at the exception details: they should tell you what really
was not found (maybe the directory for the logfile).
It is possible a directory along the path does not exist.
--
regards,
kushal
Loris Bennett
2024-11-12 09:03:10 UTC
Permalink
Post by Chris Angelico
On Tue, 12 Nov 2024 at 01:59, Loris Bennett via Python-list
Post by Loris Bennett
2. In terms of generating a helpful error message, how should one
distinguish between the config file not existing and the log file not
existing?
By looking at the exception's attributes rather than assuming and
hard-coding the path in your message? Or, even better, just let the
exception bubble.
I didn't consider letting the exception bubble, as this is at the top
level of the code for a CLI program. I was hoping to just catch what I
thought might be a common error, namely the config file being missing.
--
This signature is currently under construction.
d***@online.de
2024-11-12 16:18:57 UTC
Permalink
Post by Cameron Simpson
Post by d***@online.de
Post by Loris Bennett
logging.config.fileConfig(args.config_file)
config = configparser.ConfigParser()
config.read(args.config_file)
print(f"Configuration file: {args.config_file}")
print(f"Error: configuration file {args.config_file} not found. Exiting.")
Do not replace full error information (including a traceback)
with your own reduced error message.
If you omit your `try ... except FileNotFoundError`
(or start the `except` clause with a `raise`), you
will learn where in the code the exception has been raised
and likely as well what was not found (Python is quite good
with such error details).
Actually, file-not-found is pretty well defined - the except action
itself is fine in that regard.
The original exception likely tells us which file was not found.