Age | Commit message | Author | Files | Lines |
|
|
Pass the correct pointer to be freed.
|
|
Use g_malloc0 to allocate memory in our lemon-generated parsers. This
avoids some scan-build warnings.
|
|
Expand the comment before file_getsp(), and add one before file_gets(),
indicating what it does.
Change the variables in file_getsp() so that buf always points to the
beginning of the buffer and a curp variable points to the *current*
position in the buffer, rather than using buf as that pointer.
|
|
The loop that processes the header comments can terminate either on an
EOF or error, or on seeing a non-comment line. Set linep on each trip
through the loop, so that, if it's null, we know the loop was terminated
due to an error or EOF; otherwise, it was terminated by a non-comment
line, which is in the line buffer. Only parse the line buffer if
there's a non-comment line in the buffer.
Include the test for a comment line in the loop test.
|
|
Don't check for errors if file_gets() *succeeds*, check if it *fails*,
i.e. do the check after the loop is done.
To do that, call file_error() to get the error code, rather than
assuming *err magically gets set.
If HeaderMapping->parseFunc() fails, treat that as a "not our file"
condition rather than an error condition.
|
|
If commview_ncfx_read_header() returns WTAP_ERR_BAD_FILE, it doesn't
mean we got an error, it means it's probably not a Commview NCFX file.
Check for that and, if we get it, free the error message and return
WTAP_OPEN_NOT_MINE.
|
|
Sadly, some people created modified versions of pcap format that use
*the standard pcap magic number* but that aren't readable by code that
reads the standard format, requiring heuristics to figure out which
format is being used.
The heuristics try to read the file in all formats that use the magic
number in question, and choose the format that has the fewest problems
when that's done.
The old heuristics were a bit too eager to treat damaged regular pcap
files as "Red Hat 6.1" pcap files, as they looked less damaged (or
perhaps not damaged at all) when read in the latter format.
Redo the heuristic tests by:
* reading the packets with special code that reads the packet record
headers one field at a time, applying "does this look wrong?" tests as
soon as we have read enough from the header to apply the test, and
treating a short read as a "does this look wrong?" answer of "yes, it
does";
* doing "does this look wrong?" tests of fields that are not part of the
standard pcap packet header but are part of modified pcap headers;
* attempting to "read" the packet data (skipping over it and discarding
the data) as long as the captured length isn't larger than the maximum
we support;
* not applying the heuristics to pcap file version numbers other than
2.4.
(And, yes, even though the patches to Linux libpcap that caused some of
these problematic file formats were done back in 1999, at least one
capture file in one of those formats was attached to a bug filed within
the past 2 years or so - see issue #19318 - so the ability to handle
those formats appears still to be useful to at least some users.)
|
|
The resolution on Ixia hardware-capture lcap files appears to be higher
than 1 microsecond; assume it's 1 nanosecond.
Also:
Make a Boolean variable a bool rather than an int. Rename it to
skip_ixia_extra, to clarify that it's for the Ixia formats.
Use a null buffer argument to wtap_read_bytes() to skip over the size
information.
Point to the issue for adding support for Ixia lcap files.
Ping #14073.
|
|
We're limited to 4 .11be user slots. Enforce that.
Fixes #20019
|
|
It returns true or false, so make it bool, not int.
|
|
lz4frame.h says that LZ4F_headerSize() is provided in v1.9.0+, but
careful perusal of the git repo shows that it wasn't exported to
the dynamic library until 1.9.3. Ubuntu Focal (20.04) has a package
based on 1.9.2 and needs this to compile.
|
|
Handle files with concatenated heterogeneous compressed frames
correctly. Even when the jump is small, we can't always avoid doing
a fast seek if we're changing compression types.
There are some edge cases where it might be faster to avoid the fast
seek if the target is close to our current position (with zlib), but
there are enough exceptions that it's easier to just check whether
the fast seek point is inside the current buffer (which always works,
and handles most of those cases.)
Do a very limited fast seek to the beginning of zstd frames,
rather than seeking to the beginning of the file. Most zstd compressed
files will be a single frame, but this is faster on any multiframe
file (e.g. concatenated files, which works with pcapng), and is also
necessary for concatenated heterogeneous compressed frames.
|
|
Note EIGRP never actually sets the "unreachable" boolean and thus
the associated expert warning is never added; presumably that was
lost at some point.
One exception: The PDCP dissectors have an enum preference for sequence
analysis that was being initialized to true. Use the value defined to
be equal to 1 (RLC only) so that it doesn't change.
Fix #15770
|
|
There were a few places that used gboolean FALSE
where a pointer was expected, to mean NULL.
gboolean FALSE being an int 0, this worked.
Both gcc and clang will error on this if compiled in C23 mode
(from C99 until C23, false is a macro that expands to the
integer constant 0, which has type int, so this is still
legal; in C23, false has type bool, and gcc and clang warn
about implicit conversions of bool to pointers.)
Replace the calls with NULL.
Ping #19116
|
|
Check the compression suffix earlier, and produce errors for
compression suffixes (and compression types) we recognize and
can read, but can't write.
Make sure we check for a suffix after a dot (the function
that returns all the extension types doesn't include the dot,
so do it a different way here and in the other functions.)
For now, we don't allow specifying uncompressed with --compress
(to override a compression extension, perhaps), but we could in
the future, by initializing to unknown compression and changing
to uncompressed immediately before checking the type.
Fix name in stderr message.
Allow writing compressed information to standard out for piping.
Handle double suffixes for ring buffer names, putting the changing
part before the previous extension.
We can write LZ4 compressed now, so indicate that. Thanks to the
other commits, turning it on just works.
Update the documentation and clarify the current behavior.
Follow up to 6a5de923918d2bf02338b97d4fd37f90d43833e4
|
|
Prevents possible crashes when we can't fast seek.
|
|
Check the extension of the output file earlier on. Check all
possible compression extensions, and error on compression types
we read but don't write (whether specified via extension magic
or explicitly.)
If the user asked to write a compressed type to standard out,
give the user what they asked for, as that can be useful to pipe.
When reporting an error for a file format that can't be compressed,
give the file format name in the error message.
Follow up to 6a5de923918d2bf02338b97d4fd37f90d43833e4
|
|
Fix printing "mergecap" to stderr instead of "tshark"
Add a function to convert compression extensions to compression types.
Change the name of the currently unused "wtap_can_write_extension",
which checks a "wtap_compression_type" not an extension, and use it.
Do the output file name checking for compression extension
magic in the main loop. Check all supported compression extensions,
and output an error for compression extensions supported for reading
but not for writing (whether by extension or if specified with
--compress), instead of simply writing an uncompressed file
(right now, ./run/tshark --compress=lz4 -r input.pcap -w
output.pcapng.lz4 will write an uncompressed pcapng instead of
reporting an error.)
This allows tshark to complain about not being able to write
compressed output for a live capture or for a particular file
format in one place. Add the name and/or description of the
file formats or compression type that isn't allowed to be written
to the error message.
process_cap_file is never called when capturing, so remove the
"is capturing" parameter just added; the test is done in the main
loop (unlike merging.)
Allow writing compressed data to stdout if the user asks for it;
there can be a use case involving piping.
|
|
Wireshark supports compressing the output file with gzip via the
wtap API. For mergecap and editcap, if the output filename has
the extension .gz, then gzip compression will be used to store
the output.
Fixes #12385
|
|
This makes the twisty little maze of ifdefs, all different, slightly
less twisty, and makes it a little easier to figure out where to add
code for new compression types.
|
|
For wtap_dump_file_open and wtap_dump_file_fdopen, use
a switch on compression type in all cases, so we have fewer
ifdefs depending on whether the wtap_dumper parameter is used
or not. This makes it a bit easier to add new compression types
later.
|
|
In practice, I don't believe this matters, because we always
read through the file sequentially once and only add fast seek
points during that initial read. This code from zlib is for when
a seek happens during a time when you might still be adding fast
seek points.
|
|
Always do a switch on the compression state, so we don't have to do some
ugly maze of #ifdefs so that we mark the argument unused iff we don't
support gzip compression, and so that other forms of compression have a
place in which to insert reset code.
|
|
A WFILE_T can be a handle for *any* code that writes compressed files.
|
|
Add GUI support for saving in LZ4 format as well as gzip and
uncompressed. Replace the current checkbox with a group box.
The group box looks better alongside the packet range group box
for Export Packet Dissections and would be appropriate to
substitute into the Capture Options dialog in a later commit;
a combobox might look more natural for the ordinary Save As window.
Rework the file extension fixup a bit, so that it can switch
between .gz and .lz4.
|
|
Since LZ4 fast seek is now supported (for independent frames,
the default with the lz4 command line), add support for writing
LZ4 as well.
|
|
Support for fast seeking in LZ4 frames. Only works for independent
blocks currently (linked blocks could work using the same method
as zlib, storing the window, by using the LZ4 low-level Block
API instead of the Frame API).
|
|
Fix some calculations when an uncompressed section appears
after a compressed stream has finished. We want to copy only
the unused portion of the in buffer, not all of it. Don't
include what has already been copied to the out buffer.
This allows an uncompressed area after the end of a gzip
stream to work properly. Note that the uncompressed portion
has to be at the end - there's no detection that uncompressed
data has suddenly become compressed.
|
|
The next 4 bytes that have the magic number are located
in state->in.next, which may not be state->in.buf if we're
in the middle of a file and have finished one stream and are
beginning another.
Fixes concatenated lz4 frames, etc.
|
|
There's more of a question than a comment about why there's fast seek
data added for decompressed data. Add the best answer I have - possibly
some idea about supporting concatenated uncompressed data after a
compressed stream.
[skip ci]
|
|
|
|
In nettrace_parse_address, there is no need to copy the matched parts of the
regular expression into separate buffers; g_match_info_fetch_named already
provides a NUL-terminated allocated buffer containing just the match.
While we're here, stop leaking those strings returned by
g_match_info_fetch_named. Also, insert a temporary NUL-terminator at the end
of the string we are passing into g_regex_match so that it will stop at the
expected location.
Fix #19940
|
|
If the time string is too long after stripping delimiters, then
it's just as invalid as if it was too long before stripping.
The time stamps could be checked even more for validity, but this
is necessary to avoid writing too long strings to a buffer.
Fix #19939
Co-authored-by: Darius Davis <darius-wireshark@free-range.com.au>
|
|
Give the "FILETIME, but with nanosecond resolution" routine a name
similar to the "FILETIME, but with second resolution".
|
|
Our lists are now at lists.wireshark.org.
|
|
We only need to check for extra (real) sync bytes in the initially
detected trailer length (for cases where there is a header that
contains a spurious sync byte). This prevents a possible underflow
of trailer_len.
Fix #19938
|
|
A break after return is not needed. Solaris would give a
"statement not reached" warning.
|
|
Instead of building an array of file-handler pointers, refactor so that each
file format is attempted, following the same order as existed before. Less
work needs to be done upfront, and there are no more messy arrays
of pointers to pass around.
|
|
Reduce code duplication by restructuring the code for walking through the list
of potential file types when opening a file, rolling up six instances of
near-identical code into one. Behavior should be unchanged by this commit.
While we're here, fix a trivial typo in wiretap/README.developer.
|
|
As README.developer says:
"Avoid GLib synonyms like gchar and gint and especially don't use
gpointer and gconstpointer, unless you are writing GLib callbacks and
trying to match their signature exactly. These just obscure the code and
gconstpointer in particular is just semantically weird and poor style."
We didn't convert gconstpointers in convert-glib-types.py until
5f807da9ba, so make another pass and do so on everything except our
dissector code. Convert some gpointers as well.
Ping #19116
|
|