Is it safe to parse a /proc/ file?
In general, no. (So most of the answers here are wrong.) It might be safe, depending on what property you want. But it's easy to end up with bugs in your code if you assume too much about the consistency of a file in /proc. For example, see this bug, which came from assuming that /proc/mounts was a consistent snapshot.
For example:
/proc/uptime is totally atomic, as someone mentioned in another answer -- but only since Linux 2.6.30, which is less than two years old. So even this tiny, trivial file was subject to a race condition until then, and still is in most enterprise kernels. See fs/proc/uptime.c for the current source, or the commit that made it atomic. On a pre-2.6.30 kernel, you can open the file, read a bit of it, then if you later come back and read again, the piece you get will be inconsistent with the first piece. (I just demonstrated this -- try it yourself for fun.)
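If you want to try that yourself on an old kernel, a rough sketch of such a test is below. It is only a sketch: the byte counts and the sleep are arbitrary, and on a 2.6.30+ kernel the two pieces will simply come out consistent.

/* Hypothetical demo: read /proc/uptime in two pieces with a pause in
 * between. On a pre-2.6.30 kernel the second piece is regenerated at
 * read time, so it can disagree with the first; on newer kernels the
 * seq_file buffer keeps the line consistent. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[256];
    ssize_t n1, n2;
    int fd = open("/proc/uptime", O_RDONLY);

    if (fd < 0)
        return 1;
    n1 = read(fd, buf, 4);                          /* first few bytes only */
    if (n1 < 0)
        return 1;
    sleep(2);                                       /* let the uptime advance */
    n2 = read(fd, buf + n1, sizeof(buf) - (size_t)n1 - 1);
    if (n2 < 0)
        return 1;
    buf[n1 + n2] = '\0';
    printf("%s", buf);                              /* torn line on old kernels */
    close(fd);
    return 0;
}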
/proc/mounts is atomic within a single read system call. So if you read the whole file all at once, you get a single consistent snapshot of the mount points on the system. However, if you use several read system calls -- and if the file is big, this is exactly what will happen if you use normal I/O libraries and don't pay special attention to this issue -- you will be subject to a race condition. Not only will you not get a consistent snapshot, but mount points which were present before you started and never stopped being present might go missing in what you see. To see that it's atomic for one read(), look at m_start() in fs/namespace.c and see it grab a semaphore that guards the list of mountpoints, which it keeps until m_stop(), which is called when the read() is done. To see what can go wrong, see this bug from last year (same one I linked above) in otherwise high-quality software that blithely read /proc/mounts.
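In practice that means grabbing the whole file with one big read() call instead of letting stdio dribble it in. A minimal sketch of that approach is below; the 64 KiB buffer size is an assumption, and a mount table bigger than the buffer would put you back into multiple reads and the race described above.

/* Sketch: snapshot /proc/mounts with a single read() system call.
 * Assumes the whole file fits in the buffer; a second read() call
 * would reintroduce the inconsistency described above. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    static char buf[64 * 1024];                     /* assumed large enough */
    ssize_t n;
    int fd = open("/proc/mounts", O_RDONLY);

    if (fd < 0)
        return 1;
    n = read(fd, buf, sizeof(buf) - 1);             /* one consistent snapshot */
    close(fd);
    if (n < 0)
        return 1;
    buf[n] = '\0';
    fputs(buf, stdout);
    return 0;
}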
/proc/net/tcp, which is the one you're actually asking about, is even less consistent than that. It's atomic only within each row of the table. To see this, look at listening_get_next() in net/ipv4/tcp_ipv4.c and established_get_next() just below in the same file, and see the locks they take out on each entry in turn. I don't have repro code handy to demonstrate the lack of consistency from row to row, but there are no locks there (or anything else) that would make it consistent. Which makes sense if you think about it -- networking is often a super-busy part of the system, so it's not worth the overhead to present a consistent view in this diagnostic tool.
The other piece that keeps /proc/net/tcp atomic within each row is the buffering in seq_read(), which you can read in fs/seq_file.c. This ensures that once you read() part of one row, the text of the whole row is kept in a buffer so that the next read() will get the rest of that row before starting a new one. The same mechanism is used in /proc/mounts to keep each row atomic even if you do multiple read() calls, and it's also the mechanism that /proc/uptime in newer kernels uses to stay atomic. That mechanism does not buffer the whole file, because the kernel is cautious about memory use.
Most files in /proc will be at least as consistent as /proc/net/tcp, with each row a consistent picture of one entry in whatever information they're providing, because most of them use the same seq_file abstraction. As the /proc/uptime example illustrates, though, some files were still being migrated to use seq_file as recently as 2009; I bet there are still some that use older mechanisms and don't have even that level of atomicity. These caveats are rarely documented. For a given file, your only guarantee is to read the source.
In the case of /proc/net/tcp, you can read it and parse each line without fear. But if you try to draw any conclusions from multiple lines at once -- beware, other processes and the kernel are changing it while you read it, and you are probably creating a bug.
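A parser in that spirit treats every line as its own little snapshot and never correlates rows. For instance, a sketch like the one below; the hex field widths match what kernels of this era print, but treat the exact layout as an assumption and check your own kernel.

/* Sketch: parse /proc/net/tcp row by row. Each row is self-consistent
 * (and seq_read keeps a row intact even across multiple read() calls),
 * but nothing here tries to relate one row to another. */
#include <stdio.h>

int main(void)
{
    char line[512];
    FILE *f = fopen("/proc/net/tcp", "r");

    if (!f)
        return 1;
    fgets(line, sizeof(line), f);                   /* skip the header line */
    while (fgets(line, sizeof(line), f)) {
        unsigned sl, lip, lport, rip, rport, st;
        /* rows look like: "  0: 017AA8C0:0035 00000000:0000 0A ..." */
        if (sscanf(line, "%u: %8x:%4x %8x:%4x %2x",
                   &sl, &lip, &lport, &rip, &rport, &st) == 6)
            printf("entry %u: local port %u, state 0x%02X\n",
                   sl, lport, st);
    }
    fclose(f);
    return 0;
}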
Although the files in /proc appear as regular files in userspace, they are not really files but rather entities that support the standard file operations from userspace (open, read, close). Note that this is quite different from having an ordinary file on disk that is being changed by the kernel.
All the kernel does is print its internal state into its own memory using a sprintf-like function, and that memory is copied into userspace whenever you issue a read(2) system call.
The kernel handles these calls in an entirely different way than it does for regular files, which could mean that the entire snapshot of the data you will read is prepared at the time you open(2) it, with the kernel making sure that concurrent calls are consistent and atomic. I haven't read that anywhere, but it doesn't really make sense otherwise.
My advice is to take a look at the implementation of a proc file in your particular Unix flavour. This is really an implementation issue (as is the format and the contents of the output) that is not governed by a standard.
The simplest example would be the implementation of the uptime proc file in Linux. Note how the entire buffer is produced in the callback function supplied to single_open.
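For reference, a proc file in that style looks roughly like the sketch below. It is only a sketch: the names are made up, and the registration details vary with kernel version (newer kernels take a struct proc_ops instead of struct file_operations, for example), so check your own tree rather than copying this.

/* Rough sketch of a single_open()-style proc file, loosely modeled on
 * the uptime implementation. The whole record is produced in one show
 * callback, so a reader gets one consistent snapshot of it. */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>

static int example_proc_show(struct seq_file *m, void *v)
{
    /* everything printed here goes into one seq_file buffer */
    seq_printf(m, "some counter: %d\n", 42);
    return 0;
}

static int example_proc_open(struct inode *inode, struct file *file)
{
    return single_open(file, example_proc_show, NULL);
}

/* older-kernel style registration; newer kernels use struct proc_ops */
static const struct file_operations example_proc_fops = {
    .owner   = THIS_MODULE,
    .open    = example_proc_open,
    .read    = seq_read,
    .llseek  = seq_lseek,
    .release = single_release,
};

static int __init example_init(void)
{
    proc_create("example", 0444, NULL, &example_proc_fops);
    return 0;
}

static void __exit example_exit(void)
{
    remove_proc_entry("example", NULL);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");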
/proc is a virtual file system: in fact, it just gives a convenient view of the kernel internals. It's definitely safe to read it (that's why it's there), but it's risky in the long term, as the internals of these virtual files may evolve with newer versions of the kernel.
EDIT
More information is available in the proc documentation in the Linux kernel docs, chapter 1.4 Networking. I can't find anything on how the information evolves over time. I thought it was frozen on open, but I can't get a definite answer.
EDIT2
According to the SCO doc (not Linux, but I'm pretty sure all flavours of *nix behave like that):
Although process state and consequently the contents of /proc files can change from instant to instant, a single read(2) of a /proc file is guaranteed to return a ``sane'' representation of state, that is, the read will be an atomic snapshot of the state of the process. No such guarantee applies to successive reads applied to a /proc file for a running process. In addition, atomicity is specifically not guaranteed for any I/O applied to the as (address-space) file; the contents of any process's address space might be concurrently modified by an LWP of that process or any other process in the system.
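Taken at face value, that means that if you want a sane picture of a process you should grab it with one read(2), and re-open and re-read for each fresh sample rather than reading a file incrementally. A sketch under that assumption (the path and buffer size are just examples):

/* Sketch: take one snapshot per open/read pair, per the quoted
 * guarantee that a single read(2) is an atomic snapshot while
 * successive reads are not. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static ssize_t snapshot(const char *path, char *buf, size_t bufsz)
{
    int fd = open(path, O_RDONLY);
    ssize_t n;

    if (fd < 0)
        return -1;
    n = read(fd, buf, bufsz - 1);                   /* one atomic snapshot */
    close(fd);
    if (n >= 0)
        buf[n] = '\0';
    return n;
}

int main(void)
{
    char buf[4096];

    /* take two independent snapshots; never mix bytes from both */
    if (snapshot("/proc/self/stat", buf, sizeof(buf)) > 0)
        fputs(buf, stdout);
    sleep(1);
    if (snapshot("/proc/self/stat", buf, sizeof(buf)) > 0)
        fputs(buf, stdout);
    return 0;
}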
The procfs API in the Linux kernel provides an interface to make sure that reads return consistent data. Read the comments in __proc_file_read. Item 1) in the big comment block explains this interface.
That being said, it is of course up to the implementation of a specific proc file to use this interface correctly to make sure its returned data is consistent. So, to answer your question: no, the kernel does not guarantee consistency of the proc files during a read, but it provides the means for the implementations of those files to provide consistency.
I have the source for Linux 2.6.27.8 handy since I'm doing driver development at the moment on an embedded ARM target.
The file ...linux-2.6.27.8-lpc32xx/net/ipv4/raw.c at line 934 contains, for example:
seq_printf(seq, "%4d: %08X:%04X %08X:%04X"
" %02X %08X:%08X %02X:%08lX %08X %5d %8d %lu %d %p %d\n",
i, src, srcp, dest, destp, sp->sk_state,
atomic_read(&sp->sk_wmem_alloc),
atomic_read(&sp->sk_rmem_alloc),
0, 0L, 0, sock_i_uid(sp), 0, sock_i_ino(sp),
atomic_read(&sp->sk_refcnt), sp, atomic_read(&sp->sk_drops));
which outputs
[wally@zenetfedora ~]$ cat /proc/net/tcp
sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode
0: 017AA8C0:0035 00000000:0000 0A 00000000:00000000 00:00000000 00000000 0 0 15160 1 f552de00 299
1: 00000000:C775 00000000:0000 0A 00000000:00000000 00:00000000 00000000 0 0 13237 1 f552ca00 299
...
in function raw_sock_seq_show(), which is part of a hierarchy of procfs handling functions. The text is not generated until a read() request is made of the /proc/net/tcp file, a reasonable mechanism since procfs reads are surely much less common than updating the information.
Some drivers (such as mine) implement the proc_read function with a single sprintf(). The extra complication in the core drivers' implementation is to handle potentially very long output which may not fit in the intermediate, kernel-space buffer during a single read.
I tested that with a program using a 64K read buffer, but it results in a kernel space buffer of 3072 bytes in my system for proc_read to return data. Multiple calls with advancing pointers are needed to get more than that much text returned. I don't know what the right way is to make the returned data consistent when more than one I/O is needed. Certainly each entry in /proc/net/tcp is self-consistent. There is some likelihood that lines side-by-side are snapshot at different times.
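On the userspace side, the usual pattern when one read() isn't enough is simply to keep calling read() into a growing buffer until it returns 0. A sketch of that loop follows (the buffer sizes and growth strategy are arbitrary), with the caveat from the answers above that rows fetched by different calls may be snapshots from slightly different moments:

/* Sketch: read a /proc file that exceeds the kernel's per-read buffer
 * by looping until read() returns 0. Each row stays intact, but rows
 * returned by different read() calls may be snapshots taken at
 * slightly different times. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    size_t cap = 8192, len = 0;
    char *buf = malloc(cap);
    int fd = open("/proc/net/tcp", O_RDONLY);
    ssize_t n;

    if (!buf || fd < 0)
        return 1;
    while ((n = read(fd, buf + len, cap - len - 1)) > 0) {
        len += (size_t)n;
        if (cap - len < 4096) {                     /* grow before the next call */
            char *tmp = realloc(buf, cap * 2);
            if (!tmp)
                return 1;
            buf = tmp;
            cap *= 2;
        }
    }
    close(fd);
    if (n < 0)
        return 1;
    buf[len] = '\0';
    fputs(buf, stdout);
    free(buf);
    return 0;
}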
Short of unknown bugs, there are no race conditions in /proc that would lead to reading corrupted data or a mix of old and new data. In this sense, it's safe. However, there's still the race condition that much of the data you read from /proc is potentially outdated as soon as it's generated, and even more so by the time you get to reading/processing it. For instance, processes can die at any time and a new process can be assigned the same pid; the only process ids you can ever use without race conditions are your own child processes'. Same goes for network information (open ports, etc.) and really most of the information in /proc. I would consider it bad and dangerous practice to rely on any data in /proc being accurate, except data about your own process and potentially its child processes. Of course it may still be useful to present other information from /proc to the user/admin for informative/logging/etc. purposes.
When you read from a /proc file, the kernel is calling a function which has been registered in advance to be the "read" function for that proc file. See the __proc_file_read function in fs/proc/generic.c.
Therefore, the safety of the proc read is only as safe as the function the kernel calls to satisfy the read request. If that function properly locks all data it touches and returns to you in a buffer, then it is completely safe to read using that function. Since proc files like the one used for satisfying read requests to /proc/net/tcp have been around for a while and have undergone scrupulous review, they are about as safe as you could ask for. In fact, many common Linux utilities rely on reading from the proc filesystem and formatting the output in a different way. (Off the top of my head, I think 'ps' and 'netstat' do this.)
As always, you don't have to take my word for it; you can look at the source to calm your fears. The following documentation from proc_net_tcp.txt tells you where the "read" functions for /proc/net/tcp live, so you can look at the actual code that is run when you read from that proc file and verify for yourself that there are no locking hazards.
This document describes the interfaces /proc/net/tcp and /proc/net/tcp6. Note that these interfaces are deprecated in favor of tcp_diag. These /proc interfaces provide information about currently active TCP connections, and are implemented by tcp4_seq_show() in net/ipv4/tcp_ipv4.c and tcp6_seq_show() in net/ipv6/tcp_ipv6.c, respectively.