path: root/epan/dissectors/packet-kafka.c
Age | Commit message | Author | Files | Lines
2024-02-13 | check_spelling.py: add globs | Martin Mathieson | 1 file, -1/+1
2024-01-19 | Dissectors: remove accidental double-colons | Martin Mathieson | 1 file, -1/+1
2023-11-22 | Change some `wmem_packet_scope()` to `pinfo->pool` | David Perry | 1 file, -3/+3
As requested [here][1], help with removing calls to `wmem_packet_scope()` in favour of references to `pinfo->pool`:

* Plugins chosen semi-alphabetically.
* When a calling function already has a `pinfo` argument, use that.
* Remove `_U_` from its signature if it was there.
* If a function seems narrowly focused on getting and (possibly) returning memory, change the function signature to take a `wmem_allocator_t *`.
* If it seems more focused on packet-based operations, pass in a `packet_info *` instead and use `pinfo->pool` within.
* Some of the files in this MR still have references to `wmem_packet_scope()` where it would take significant work to remove. These will need revisiting later.

[1]: https://www.wireshark.org/lists/wireshark-dev/202107/msg00052.html
2023-11-20 | Remove init of proto variables | Stig Bjørlykke | 1 file, -234/+234
Remove init of proto, header field, expert info and subtree variables. This reduces the binary size by approximately 1266320 bytes, due to using .bss to zero-initialize the fields. The conversion was done using the tools/convert-proto-init.py script.
2023-08-23 | kafka: fix sync_group_request missing version check for instance_id | Alexis La Goutte | 1 file, -2/+4
Closes #19290
2023-05-28 | kafka: Don't use after free | John Thacker | 1 file, -7/+17
Neither tvb_new_child_real_data() nor tvb_composite_append() copy the real data buffer that they're given. So we can't free a decompressed buffer after making it a tvb. We can realloc if the output size is smaller. Fix #19105
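The fix described above can be sketched in plain C. The helper name is hypothetical; the real code hands the buffer to Wireshark's tvb layer, which keeps a reference to it:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of the fix: the decompression output buffer is handed to the
 * tvb layer, which does NOT copy it, so freeing it afterwards would be a
 * use-after-free.  If the decompressed size turned out smaller than the
 * allocation, we may only shrink the buffer in place with realloc. */
unsigned char *finalize_decompressed_buf(unsigned char *buf,
                                         size_t alloc_size,
                                         size_t used_size)
{
    if (used_size < alloc_size) {
        /* Shrinking keeps a valid pointer for the tvb that wraps it. */
        unsigned char *shrunk = realloc(buf, used_size);
        return shrunk ? shrunk : buf;  /* shrinking "can't fail", but be safe */
    }
    return buf;  /* exact fit: hand it over as-is, and never free() it */
}
```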
2023-05-27 | kafka: Allow reused correlation IDs on a connection | John Thacker | 1 file, -7/+7
Allow reused correlation IDs in the same connection by using a multimap, since apparently that's possible. (This still doesn't help you if you have an out of order capture such that you have multiple requests with the same ID before any responses.) Fix #19021
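A minimal sketch of the multimap idea in plain C, assuming a simple linked list per connection (all names are hypothetical; the real dissector uses wmem map structures). Each response is matched to the oldest still-unmatched request with the same correlation ID:

```c
#include <assert.h>
#include <stdlib.h>

/* One entry per request seen on the connection; duplicates of the same
 * correlation_id are allowed, which is what makes this a "multimap". */
typedef struct match_entry {
    int correlation_id;
    unsigned request_frame;   /* frame numbers start at 1 */
    int matched;
    struct match_entry *next;
} match_entry;

void add_request(match_entry **head, int id, unsigned frame)
{
    match_entry *e = calloc(1, sizeof *e);
    e->correlation_id = id;
    e->request_frame = frame;
    e->next = *head;
    *head = e;
}

/* Return the frame of the oldest unmatched request with this id,
 * or 0 if none remains. */
unsigned match_response(match_entry *head, int id)
{
    match_entry *best = NULL;
    for (match_entry *e = head; e; e = e->next)
        if (e->correlation_id == id && !e->matched)
            if (!best || e->request_frame < best->request_frame)
                best = e;
    if (!best)
        return 0;
    best->matched = 1;
    return best->request_frame;
}
```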
2022-12-28 | use uncompress_zstd in Kafka | Kevin Albertson | 1 file, -47/+7
2022-12-05 | Kafka: Add more loop checks | João Valverde | 1 file, -9/+29
Add a safeguard to limit the maximum number of iterations. Do not allocate a new buffer for every loop iteration in a loop that depends on the result of the decompression routine. Either allocate the buffer once or free it after use. Defensive programming is more important than speed in this case.
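The two safeguards above can be sketched as follows. `MAX_LOOP_ITERATIONS`, `CHUNK_SIZE`, and all function names are illustrative stand-ins, not the dissector's actual code:

```c
#include <assert.h>
#include <stdlib.h>

/* Cap the iterations of a loop whose exit condition depends on
 * attacker-controlled decompression results, and reuse one output
 * chunk instead of allocating a fresh buffer per iteration. */
#define MAX_LOOP_ITERATIONS 100
#define CHUNK_SIZE 4096

/* Stand-in for one decompression step: returns input bytes consumed
 * (0 = no progress, treated as failure). */
typedef size_t (*consume_fn)(const unsigned char *in, size_t in_len,
                             unsigned char *out, size_t out_len);

int bounded_decompress(const unsigned char *in, size_t in_len, consume_fn step)
{
    unsigned char *chunk = malloc(CHUNK_SIZE);   /* allocated once */
    size_t pos = 0;
    int ok = 0;
    if (!chunk)
        return 0;
    for (int i = 0; i < MAX_LOOP_ITERATIONS && pos < in_len; i++) {
        size_t used = step(in + pos, in_len - pos, chunk, CHUNK_SIZE);
        if (used == 0)
            goto done;               /* no progress: bail out */
        pos += used;
    }
    ok = (pos == in_len);            /* all input consumed within the cap */
done:
    free(chunk);                     /* freed after use, not leaked per loop */
    return ok;
}

/* Demo steps: one consumes up to 8 bytes per call, one stalls. */
size_t demo_step(const unsigned char *in, size_t in_len,
                 unsigned char *out, size_t out_len)
{
    (void)in; (void)out; (void)out_len;
    return in_len < 8 ? in_len : 8;
}

size_t demo_stall(const unsigned char *in, size_t in_len,
                  unsigned char *out, size_t out_len)
{
    (void)in; (void)in_len; (void)out; (void)out_len;
    return 0;
}
```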
2022-12-05 | kafka: fix note of ZSTD_decompressStream return | Kevin Albertson | 1 file, -2/+5
2022-12-05 | kafka: stop decompressing once all input is consumed | Kevin Albertson | 1 file, -3/+2
2022-12-01 | kafka: Don't try to decompress if the length is zero. | John Thacker | 1 file, -0/+7
There's no point in trying to decompress a message with length zero, and some of the third-party decompression libraries (e.g. zstd) can give unexpected results that lead to infinite loops if we do so. A message length of zero almost surely indicates a file with errors.
2022-08-12 | check_typed_item_calls.py: check for consecutive calls to same item | Martin Mathieson | 1 file, -4/+0
2022-01-17 | Kafka: Make sure a string pointer is valid. | Gerald Combs | 1 file, -7/+6
Make sure dissect_kafka_string_new always sets a valid display string. Fixes #17880.
2022-01-15 | kafka: Fix Clang warning: uninitialized argument value | Alexis La Goutte | 1 file, -1/+1
2021-12-29 | kafka: have dissect_kafka_string_new() return the display string. | Guy Harris | 1 file, -18/+18
Instead of having it return the information needed to fetch the string value, just have it return the string to use to display that string, as that's all its only caller needs. (Note that the display string has had control characters, etc. escaped, which is what you want for text that appears in a string displayed in the protocol details.)
2021-12-29 | kafka: remove compiler warnings. | Dario Lombardo | 1 file, -1/+1
Fixes:

../epan/dissectors/packet-kafka.c:1508:5: warning: 'key_len' may be used uninitialized in this function [-Wmaybe-uninitialized]
 1508 |     proto_item_append_text(header_ti, " (Key: %s)",
 1509 |         tvb_get_string_enc(pinfo->pool, tvb, key_off, key_len, ENC_UTF_8));
../epan/dissectors/packet-kafka.c:1501:18: note: 'key_len' was declared here
 1501 |     int key_off, key_len;
../epan/dissectors/packet-kafka.c:1508:5: warning: 'key_off' may be used uninitialized in this function [-Wmaybe-uninitialized]
../epan/dissectors/packet-kafka.c:1501:9: note: 'key_off' was declared here
2021-12-29 | Kafka: Add back some code. | Gerald Combs | 1 file, -0/+18
a03f43645d removed some code that set offset and length parameters. Add it back.
2021-12-28 | Kafka: Be more strict when dissecting varints. | Gerald Combs | 1 file, -36/+19
The Kafka dissector uses the return value of tvb_get_varint to advance the packet offset in many places. If tvb_get_varint fails it returns 0, which means our offset isn't guaranteed to advance. Stop dissection whenever that happens. Fixes #17811.
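The failure mode can be illustrated with a simplified stand-in for tvb_get_varint (names and signatures here are hypothetical, but the zero-on-failure convention matches what the commit describes):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Simplified stand-in for tvb_get_varint(): decode a protobuf-style
 * unsigned varint, returning the number of bytes consumed, or 0 on a
 * truncated/overlong encoding. */
size_t get_varint(const uint8_t *buf, size_t len, uint64_t *value)
{
    uint64_t v = 0;
    for (size_t i = 0; i < len && i < 10; i++) {
        v |= (uint64_t)(buf[i] & 0x7f) << (7 * i);
        if (!(buf[i] & 0x80)) {   /* high bit clear: last byte */
            *value = v;
            return i + 1;
        }
    }
    return 0;  /* truncated or overlong: no bytes "consumed" */
}

/* Caller pattern after the fix: a zero return must abort dissection
 * rather than leave the offset unchanged, which would loop forever. */
int advance(const uint8_t *buf, size_t buf_len, size_t *offset, uint64_t *out)
{
    size_t n = get_varint(buf + *offset, buf_len - *offset, out);
    if (n == 0)
        return 0;      /* stop dissection: malformed varint */
    *offset += n;      /* guaranteed forward progress */
    return 1;
}
```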
2021-12-19 | Replace g_snprintf() with snprintf() (dissectors) | João Valverde | 1 file, -10/+10
Use macros from inttypes.h with format strings.
2021-12-03 | epan: Remove STR_ASCII and STR_UNICODE | João Valverde | 1 file, -34/+34
These display bases work by replacing unprintable characters, so the name is a misnomer. In addition they are the same option, and this display behaviour is not something that is configurable. This does not affect encodings, because all our internal text strings need to be valid UTF-8 and the source encoding is specified using ENC_*. Remove the assertion for valid UTF-8 in proto.c: tvb_get_*_string() must always return a valid UTF-8 string, so we don't need to assert that, and it is expensive.
2021-08-24 | [build] fix warnings for unused variables | Lin Sun | 1 file, -2/+0
2021-07-27 | Change some `wmem_packet_scope()` to `pinfo->pool` | David Perry | 1 file, -87/+87
2020-12-08 | Kafka: Decrease our maximum decompression buffer size. | Gerald Combs | 1 file, -1/+3
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/record/KafkaLZ4BlockOutputStream.java maxes out at 2^22, so use that.
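The clamp can be sketched in a few lines; the constant and function names are illustrative, not the dissector's actual identifiers:

```c
#include <assert.h>
#include <stdint.h>

/* Never trust an on-the-wire size field when sizing the decompression
 * buffer.  2^22 (4 MiB) mirrors the cap used by Kafka's own
 * KafkaLZ4BlockOutputStream, per the commit message above. */
#define MAX_DECOMPRESSION_SIZE (1u << 22)

uint32_t clamp_decompression_size(uint32_t claimed)
{
    return claimed > MAX_DECOMPRESSION_SIZE ? MAX_DECOMPRESSION_SIZE : claimed;
}
```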
2020-12-02 | Kafka: Limit our decompression size. | Gerald Combs | 1 file, -24/+32
Don't assume that the Internet has our best interests at heart when it gives us the size of our decompression buffer. Assign an arbitrary limit of 50 MB.

This fixes #16739 in that it takes care of

** (process:17681): WARNING **: 20:03:07.440: Dissector bug, protocol Kafka, in packet 31: ../epan/proto.c:7043: failed assertion "end >= fi->start"

which is different from the original error output. It looks like *that* might have been taken care of in one of the other recent Kafka bug fixes.

The decompression routines return a success or failure status. Use gbooleans instead of ints for that.
2020-11-08 | Kafka: Fixup returned offsets and initialize variables. | Gerald Combs | 1 file, -19/+48
Many of the Kafka dissector's type dissection routines either returned an offset or -1 in the event of an error. We don't appear to check for errors anywhere, so ensure that those routines always return a valid offset. Make those routines always initialize their type offset and length variables. Fixes #16985.
2020-10-09 | kafka: fix uninitialized value | Alexis La Goutte | 1 file, -6/+6
Found by the Clang analyzer.
2020-09-23 | Kafka: Check returned offsets. | Gerald Combs | 1 file, -2/+15
dissect_kafka_regular_bytes might return -1, so handle that in dissect_kafka_message_old. Closes #16784.
2020-09-09 | ieee80211: fix Wmissing-prototypes | Alexis La Goutte | 1 file, -4/+4
no previous prototype for function 'add_ff_action_public_fields' [-Wmissing-prototypes]

Change-Id: I8be64454a21187cf60a04c903acfbb18f2a12095
2020-07-31 | Fixed the usage of proto_tree_add_bytes | Piotr Smolinski | 1 file, -40/+30
Bug: 16744
Change-Id: I57e37a3e8a7b3213a381a43b366bad87a39c6625
Reviewed-on: https://code.wireshark.org/review/38000
Petri-Dish: Peter Wu <peter@lekensteyn.nl>
Tested-by: Petri Dish Buildbot
Reviewed-by: Peter Wu <peter@lekensteyn.nl>
2020-07-24 | Support for Kafka 2.5 | Piotr Smolinski | 1 file, -1235/+2446
The change is massive, mostly due to KIP-482. The flexible version coding affects every string, bytes or array field. In order to keep the compatibility, the old and new style field codings must stay next to each other.

Plus:
* correlation-id request/response matching
* new fields (other than KIP-482)
* some fixes to the messages that were not tested sufficiently before

Bug: 16540
Bug: 16706
Bug: 16708
Change-Id: I39b1b6a230e393d3bee3e3d8625541add9c83e5d
Reviewed-on: https://code.wireshark.org/review/37886
Petri-Dish: Martin Mathieson <martin.r.mathieson@googlemail.com>
Tested-by: Petri Dish Buildbot
Reviewed-by: Anders Broman <a.broman58@gmail.com>
2020-07-19 | kafka: zstd: free the composite tvb only once | Martin Kaiser | 1 file, -8/+11
Fix the composite tvb handling for zstd decompression in the same way as we already did for lz4 and snappy. Allocate the composite tvb only if we are certain that data will be added to it. Do not free the composite tvb ourselves; leave this to epan cleanup.

Change-Id: Iac74ea6e6d220b05858a7eb267276ff983b1b2ab
Reviewed-on: https://code.wireshark.org/review/37900
Reviewed-by: Martin Kaiser <wireshark@kaiser.cx>
Petri-Dish: Martin Kaiser <wireshark@kaiser.cx>
Tested-by: Petri Dish Buildbot
Reviewed-by: Alexis La Goutte <alexis.lagoutte@gmail.com>
2020-07-13 | kafka: snappy: free the composite tvb only once | Martin Kaiser | 1 file, -9/+10
The snappy decompression routine has the same bug that was fixed for lz4 in 79576219c9 ("kafka: lz4: free the composite tvb only once"). Refactor the composite tvb handling for snappy as well. Allocate the composite tvb only if we are certain that data will be added to it. Do not free the composite tvb ourselves; leave this to epan cleanup.

Change-Id: Ide3a88d1c02e525fe1aadd176068ce68c2330b98
Reviewed-on: https://code.wireshark.org/review/37838
Reviewed-by: Martin Kaiser <wireshark@kaiser.cx>
Petri-Dish: Martin Kaiser <wireshark@kaiser.cx>
Tested-by: Petri Dish Buildbot
Reviewed-by: Anders Broman <a.broman58@gmail.com>
2020-07-05 | kafka: lz4: free the composite tvb only once | Martin Kaiser | 1 file, -8/+11
Try to clean up the composite tvb handling during lz4 decompression.

If we detect an error straight away, before doing any lz4 decompression, we don't allocate a composite tvb at all. The comments in the tvb code say explicitly that we must not call tvb_new_composite() without adding at least one piece of data.

If we start decompressing and run into problems after creating the composite tvb and linking it to the packet's main tvb, we must not free the composite tvb manually. The epan library will do this for us when dissection of the packet is finished.

While at it, make sure that we always finalize the composite tvb if we allocated it and added data to it.

Bug: 16672
Change-Id: I3e3fb303a823640d7707277a109019fc3aad22f2
Reviewed-on: https://code.wireshark.org/review/37696
Petri-Dish: Alexis La Goutte <alexis.lagoutte@gmail.com>
Reviewed-by: Anders Broman <a.broman58@gmail.com>
2020-06-12 | Kafka: fix the FETCH response alignment issue | Piotr Smolinski | 1 file, -13/+35
There was a problem in FETCH response parsing when the server had more data than the requested maximal return size. In such a case the server checks whether the first chunk of data fits into the buffer. If it does not, the first chunk is returned as a whole to the requestor. Otherwise it is assumed that the client is capable of discarding invalid content, and the server pushes the maximum available block. This makes sense, because the default block is 10 MB and pushing it opaquely leverages zero-copy IO from the file system to the network.

The existing implementation assumed that the last batch is aligned with the end of the buffer. Actually, if there is more data, the last part is delivered truncated.

This patch:
* fixes the last-part alignment handling
* adds an opaque field for truncated content
* moves the preferred replica field to the proper context

Bug: 16623
Change-Id: Iee6d513ce6711091e5561646a3fd563501eabdda
Reviewed-on: https://code.wireshark.org/review/37446
Petri-Dish: Alexis La Goutte <alexis.lagoutte@gmail.com>
Tested-by: Petri Dish Buildbot
Reviewed-by: Anders Broman <a.broman58@gmail.com>
2020-02-15 | Some issues spotted by PVS-Studio in bug 16335. Many more remain. | Martin Mathieson | 1 file, -6/+0
Change-Id: If856e25af8e33eeef5b9e595f1f6820459892b17
Reviewed-on: https://code.wireshark.org/review/36110
Petri-Dish: Martin Mathieson <martin.r.mathieson@googlemail.com>
Tested-by: Petri Dish Buildbot
Reviewed-by: Anders Broman <a.broman58@gmail.com>
2019-12-02 | kafka: don't use an empty tvb list. | Dario Lombardo | 1 file, -3/+2
Bug: 16242
Change-Id: I1a7cfa504d46cab681c7803227102cafcda519fa
Reviewed-on: https://code.wireshark.org/review/35277
Petri-Dish: Dario Lombardo <lomato@gmail.com>
Tested-by: Petri Dish Buildbot
Reviewed-by: Michael Mann <mmann78@netscape.net>
2019-10-27 | Add more checks, fail for negative byte block lengths. | Guy Harris | 1 file, -19/+50
Have dissect_kafka_string_new() set a flag if the length was negative. If the length is negative, don't try to process what comes afterwards.

Make the length argument to decompression routines unsigned, and do various checks. Don't try to decompress a zero-length block, and quit if the decompressed block is zero-length.

Bug: 16082
Change-Id: I34c2ea99aa096b3f5724d9b113171b105bd6c60b
Reviewed-on: https://code.wireshark.org/review/34867
Petri-Dish: Guy Harris <guy@alum.mit.edu>
Tested-by: Petri Dish Buildbot
Reviewed-by: Guy Harris <guy@alum.mit.edu>
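The length checks can be sketched in plain C (the enum and function name are hypothetical). In the Kafka wire protocol a length of -1 conventionally marks a null string/bytes value; anything else negative is malformed, and a zero-length block must not be fed to a decompressor:

```c
#include <assert.h>
#include <stdint.h>

typedef enum {
    LEN_OK,       /* positive length: safe to process */
    LEN_NULL,     /* -1: the protocol's "null" marker, skip the value */
    LEN_INVALID   /* malformed or not decompressible: fail dissection */
} len_status;

len_status check_compressed_length(int32_t len)
{
    if (len == -1)
        return LEN_NULL;      /* valid null marker */
    if (len < 0)
        return LEN_INVALID;   /* more negative than -1: malformed */
    if (len == 0)
        return LEN_INVALID;   /* never decompress a zero-length block */
    return LEN_OK;
}
```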
2019-10-18 | Kafka: Fix a length check. | Gerald Combs | 1 file, -1/+1
Skip past our chunk size before checking our available length.

Bug: 16117
Change-Id: I39ddf1f6861de3b3adea59df2f30abfe3a4f7295
Reviewed-on: https://code.wireshark.org/review/34795
Reviewed-by: Gerald Combs <gerald@wireshark.org>
Petri-Dish: Gerald Combs <gerald@wireshark.org>
Tested-by: Petri Dish Buildbot
Reviewed-by: Anders Broman <a.broman58@gmail.com>
2019-09-09 | Kafka: Fix Dead Store | Alexis La Goutte | 1 file, -1/+1
Fix dead store (dead assignment/dead increment) warning found by Clang.

Change-Id: I013c1bdc943033550f497b1be0dfc7979ca49517
Reviewed-on: https://code.wireshark.org/review/34484
Reviewed-by: Anders Broman <a.broman58@gmail.com>
2019-09-09 | Kafka: Fix Dead Store | Alexis La Goutte | 1 file, -5/+5
Fix dead store (dead assignment/dead increment) warning found by Clang.

Change-Id: I3ac2e2b6a1ed7621f65f1a98e8b7b3704e8b299d
Reviewed-on: https://code.wireshark.org/review/34481
Petri-Dish: Alexis La Goutte <alexis.lagoutte@gmail.com>
Tested-by: Petri Dish Buildbot
Reviewed-by: Anders Broman <a.broman58@gmail.com>
2019-09-05 | kafka: Cleanup to use "native" APIs. | Michael Mann | 1 file, -143/+65
Add "native" support for the "zig-zag" version of a varint in proto.[ch] and tvbuff.[ch]. Convert the use of varint in the KAFKA dissector to use the (new) "native" API.

Ping-Bug: 15988
Change-Id: Ia83569203877df8c780f4f182916ed6327d0ec6c
Reviewed-on: https://code.wireshark.org/review/34386
Petri-Dish: Alexis La Goutte <alexis.lagoutte@gmail.com>
Tested-by: Petri Dish Buildbot
Reviewed-by: Alexis La Goutte <alexis.lagoutte@gmail.com>
Reviewed-by: Anders Broman <a.broman58@gmail.com>
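The zig-zag mapping referred to above is the standard one also used by Protocol Buffers: it interleaves negative and non-negative integers so that values of small magnitude, of either sign, encode into few varint bytes. A minimal sketch:

```c
#include <assert.h>
#include <stdint.h>

/* Zig-zag: 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, 2 -> 4, ...
 * The arithmetic right shift of n by 63 yields all-ones for negative n,
 * flipping every bit of the left-shifted value. */
uint64_t zigzag_encode(int64_t n)
{
    return ((uint64_t)n << 1) ^ (uint64_t)(n >> 63);
}

int64_t zigzag_decode(uint64_t z)
{
    return (int64_t)(z >> 1) ^ -(int64_t)(z & 1);
}
```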
2019-08-27 | Kafka: fixed OffsetForLeaderEpoch dissection | Piotr Smolinski | 1 file, -132/+155
Bug: 16023
Change-Id: I78e1354ac5509707c818d7968c7067583fb469ba
Reviewed-on: https://code.wireshark.org/review/34379
Petri-Dish: Michael Mann <mmann78@netscape.net>
Tested-by: Petri Dish Buildbot
Reviewed-by: Michael Mann <mmann78@netscape.net>
2019-08-27 | kafka: remove unused hf/ei entries. | Dario Lombardo | 1 file, -15/+0
Change-Id: I98a3a1456fbfeb726a1a81a0b46714556fe951cd
Reviewed-on: https://code.wireshark.org/review/34383
Petri-Dish: Anders Broman <a.broman58@gmail.com>
Tested-by: Petri Dish Buildbot
Reviewed-by: Anders Broman <a.broman58@gmail.com>
2019-08-20 | Kafka: include zstd compression in Kafka message batches | Piotr Smolinski | 1 file, -0/+42
Change-Id: I1d06486ccf7b174ee9aa621fa3d8acb8b3673777
Reviewed-on: https://code.wireshark.org/review/34222
Petri-Dish: Anders Broman <a.broman58@gmail.com>
Tested-by: Petri Dish Buildbot
Reviewed-by: Anders Broman <a.broman58@gmail.com>
2019-08-20 | Kafka: fix the name shadowing | Piotr Smolinski | 1 file, -4/+8
Post-merge fix.

Change-Id: I712d275f90c5a1e425865654143ead7c3a04998b
Reviewed-on: https://code.wireshark.org/review/34332
Petri-Dish: Anders Broman <a.broman58@gmail.com>
Tested-by: Petri Dish Buildbot
Reviewed-by: Anders Broman <a.broman58@gmail.com>
2019-08-20 | Kafka: add support for Kafka 2.3+ dissection | Piotr Smolinski | 1 file, -516/+5318
Existing Apache Kafka support in Wireshark ends at version 0.10. Version 0.11 (June 2017) brought significant changes to the message format, which made the existing Wireshark Kafka dissector obsolete. The recently released Kafka 2.3 has a lot of additions to the wire protocol, which should also be addressed.

Major changes:
* Applied Kafka protocol changes since 0.10
* Zstd-packed message decompression (since Kafka 2.1)
* Added support for Kafka over TLS decryption

Bug: 15988
Change-Id: I2bba2cfefa884638b6d4d6f32ce7d016cbba0e28
Reviewed-on: https://code.wireshark.org/review/34224
Petri-Dish: Anders Broman <a.broman58@gmail.com>
Tested-by: Petri Dish Buildbot
Reviewed-by: Anders Broman <a.broman58@gmail.com>
2019-07-26 | HTTPS (almost) everywhere. | Guy Harris | 1 file, -1/+1
Change all wireshark.org URLs to use https. Fix some broken links while we're at it.

Change-Id: I161bf8eeca43b8027605acea666032da86f5ea1c
Reviewed-on: https://code.wireshark.org/review/34089
Reviewed-by: Guy Harris <guy@alum.mit.edu>
2019-04-04 | epan: Convert our PROTO_ITEM_ macros to inline functions. | Gerald Combs | 1 file, -7/+7
Convert our various PROTO_ITEM_ macros to inline functions and document them.

Change-Id: I070b15d4f70d2189217a177ee8ba2740be36327c
Reviewed-on: https://code.wireshark.org/review/32706
Reviewed-by: Gerald Combs <gerald@wireshark.org>
Petri-Dish: Gerald Combs <gerald@wireshark.org>
Reviewed-by: Anders Broman <a.broman58@gmail.com>
2018-12-01 | Apply port preferences during dissector handoff registration | Jaap Keuter | 1 file, -0/+1
Handling of preferences is often done in the dissector handoff registration, so this function is often registered as the callback when registering preference handling for the module. That way the preferences are processed both when the dissector is registered and when changes happen.

Some dissectors opt to register a separate callback function to be called when preferences change. These callbacks then have to be called from the dissector handoff function explicitly, in order to have the preferences processed during dissector registration. This becomes apparent when port registration comes into play: with the migration to registering dissectors on ports with a preference, the port (range) is often retrieved from the preferences to match against the ports in a packet, to determine whether it is an incoming or outgoing packet of a server. If the callback function is not called from the dissector registration, this determination fails until the preferences are applied or changed, which causes the preference-handling callback to be called.

This change adds the calling of the callback during dissector registration, fixing some dissector port registrations in the process.

Change-Id: Ieaea7f63f8f9062c56582a042a3a5a862e286406
Signed-off-by: Jaap Keuter <jaap.keuter@xs4all.nl>
Reviewed-on: https://code.wireshark.org/review/30848
Tested-by: Petri Dish Buildbot
Reviewed-by: Anders Broman <a.broman58@gmail.com>