|
This is required to deal with the increased traffic of a passive listener.
Note that it breaks the 'auto-restart' of osmocon when active, because
the bootloader will send the prompt at 115200 baud and we won't see it ...
Change-Id: I3434bb020286ab72ba3556124786656eeacf10a9
Signed-off-by: Sylvain Munaut <tnt@246tNt.com>
|
|
TPU_DEBUG used to read from TPU RAM, which unfortunately seems rather
slow, so copying it over from there broke overall timing, leading to the
infamous "DSP Error 24" when TPU_DEBUG is enabled.
Change-Id: Idde061df8c129aa51b2e4540c8ef2e4116468c9c
|
|
We need to make sure to allocate sufficient space to include
the 32bit frame number at the start of the TPU_DEBUG msgb.
Change-Id: Ifb3ce6f91131fc361b20c3b3fe5ebc7079633ac3
|
|
The original code used simplified logic whereby it assumed that
Spansion flash means MG01GSMT and Samsung flash means MGCxGSMT.
However, there exist MGC2GSMT hw variants with Spansion S71PL032J
flash in them, thus it is necessary to check the complete device ID
rather than just the flash manufacturer ID to distinguish between
MG01GSMT with 8 MiB flash (S71PL064J) and MGCxGSMT with 4 MiB flash
(S71PL032J, K5A3281CTM or K5L3316CAM).
Distinguishing between 4 MiB and 8 MiB flash chip types is also
necessary in order to configure TIFFS reader for the correct FFS
location matching that used by the original firmware, which is
in turn necessary in order to read factory RF calibration values.
Closes: OS#4769
Change-Id: Iaa5bd295e9cbf6b525fa385f9d6cd7fcd7f8a4dd
|
|
* Switch Calypso output CS4/ADD22 to ADD22 function as needed
in order to access the upper half of the flash on GTM900 hw
variant MG01GSMT.
* Set WS=4 for safety - please refer to this technical article for
the underlying theory:
https://www.freecalypso.org/hg/freecalypso-docs/file/tip/MEMIF-wait-states
Related: OS#4769
Change-Id: I1923243937d7251f6bcfe71a0b1cc0e206a81cfa
|
|
This change fixes one bug and one uncertainty:
Bug: Huawei defined Calypso GPIO 3 to be DTR input on this modem,
following TI's precedent from C-Sample and D-Sample platforms.
(Huawei's documentation calls the corresponding FPC interface pin
UART_DTR without even mentioning that it is actually wired to
Calypso GPIO 3 in the hardware.)
The previous code (erroneously copied from gta0x target which is
different in this regard) configured this GPIO to be an output,
creating a driver conflict.
Uncertainty: GPIOs 4, 6, 10, 11 and 12 power up as inputs, and
Huawei's official fw leaves them as such. But in the absence of
someone reverse-engineering a sacrificial GTM900 module by slicing
its PCB and imaging its copper layers and vias, we don't know if
these Calypso pins are simply unconnected like they are on Openmoko
devices (in which case they are floating inputs and should be
switched to driving dummy outputs), or if they are tied off in the
hardware in one way or another, in which case leaving them as inputs
is correct.
On the reasoning that floating inputs are a lesser evil than driver
conflicts or shorted outputs, leave these GPIOs as inputs until
we gain better knowledge of this aspect of the hardware.
Related: OS#4769
Change-Id: Ia41f8bc19fb1775b0587fe1ceaa8acd066710aa5
|
|
GTM900-B can share almost all calibration tables with GTA0x and FCDEV3B,
only the VCXO is significantly different.
Related: OS#3582
Change-Id: I52b63b1d086452139b1efd308d47a4183eace745
|
|
We have new hardware targets that have appeared since the original
OS#3582 patch was created, namely Huawei GTM900-B and the upcoming
FreeCalypso Caramel2 board. These new targets need the same APC
offset as gta0x and fcdev3b (TI's original Leonardo value), they
have proper calibration records in their FFS (meaning that all
compiled-in numbers become no-effect placeholders), and their PA
tracts are similar enough to Openmoko/FCDEV3B that even in the
absence of calibration the OM/FC numbers are close enough. Thus most
of the tables in board/gta0x/rf_tables.c should be reusable by
these new targets.
However, these new targets have quite different VCXOs from Openmoko
and FCDEV3B, thus they need different AFC parameters. Hence we split
board/gta0x/afcparams.c from board/gta0x/rf_tables.c, making the
latter more reusable.
Related: OS#3582
Change-Id: I92e245843253f279dd6d61bd5098766694c5215f
|
|
Since If6e212baeb10953129fb0d5253d263567f5e12d6, we can read the TIFFS
file-system, thus we can read and use the factory RF calibration values.
* Implement parsing of factory RF calibration values for Motorola C1xx,
Openmoko GTA0x, Pirelli DP-L10, and upcoming FCDEV3B targets.
* Remove the old Tx power level control code and tables, and replace
them with new logic that exactly matches what the official chipset
firmware (TI/FreeCalypso) does, using tables in TI/FreeCalypso
format. Compiled-in tables serve as a fallback and match each
target's respective original firmware.
* Use individual AFC slope values for different targets. The original
value was/is only correct for the Mot C1xx family, whereas
GTA0x/FCDEV3B and Pirelli DP-L10 need different values because
Openmoko's VCXO (copied on the FCDEV3B) and Pirelli's VCTCXO
are different from what Motorola used.
* Take the initial AFC DAC value for the FB search from factory
calibration records on those targets on which it has been
calibrated per unit at the factory.
* Use individual APC offset for different targets instead of
the hard-coded value. The Mot/Compal's and Pirelli's firmwares
(both heavily modified relative to TI) use different APC offset
settings: 32 for Compal and 0 for Pirelli, while Openmoko and
FreeCalypso devices use 48.
Change-Id: Icf2693b751d86ec1d2563412d606c13d4c91a806
Related: OS#3582
|
|
Change-Id: Ibbdb0093d8f502dcd57ea92b53e7e56b09ee9e5f
|
|
To make the situation about stdint.h even more complicated, this
toolchain no longer #defines __int8_t_defined, which means
we again run into conflicting definitions :/
Let's try to use INT8_MAX as a key.
Change-Id: I1a74cdcd03366390e88b2d5bddf01329410b9f1c
|
|
Change-Id: I67d16858cd70cb0527c1da77bd3787d5e53100b4
|
|
GIT_SHORTHASH is used by the recently introduced snake game.
Change-Id: I837e3dcc5c44e64ca7f6c243c08981ed01f35dd1
|
|
Change-Id: I3c3f012552f2a7474ade911fc071c89e55e19352
|
|
Change-Id: Id8856ace2a31ba4ebcd04746e0c96c23a679cc40
|
|
button is pressed
I am not sure how other developers do this. There are probably better
ways to make testing faster, but I kind of like it this way.
I just call the twl3025_power_off_now function when the power key is pressed.
Change-Id: I1e55910acd8584c74e5e190b3334a8cf6987f5f3
|
|
When a dedicated channel is activated, in chan_nr2mf_task_mask()
we calculate a bitmask of the corresponding multi-frame tasks to
be enabled. Three logical kinds of the multi-frame tasks exist:
- primary (master) - the main burst processing task,
e.g. MF_TASK_{TCH_F_ODD,SDCCH4_0,GPRS_PDTCH};
- secondary - additional burst processing task (optional),
e.g. MF_TASK_GPRS_PTCCH;
- measurement - neighbour measurement task (optional),
e.g. MF_TASK_NEIGH_{PM51,PM26E,PM26O}.
By default, the primary task is set to MF_TASK_BCCH_NORM (0x00).
Due to a mistake, the secondary task has also been set to BCCH,
so when we switch to a dedicated mode, we also enable the BCCH.
This leads to a race condition between the multi-frame tasks,
when both primary and secondary ones read bursts from the DSP
at the same time, so the firmware hangs because of that:
nb_cmd(0) and rxnb.msg != NULL
BURST ID 2!=0 BURST ID 3!=1
This regression was introduced together with experimental PDCH
support [1]. Let's use value -1 to indicate that the secondary
task is not set, and apply it properly.
Change-Id: I4d667b2106fd8453eac9e24019bdfb14358d75e3
Fixes: [1] I44531bbe8743c188cc5d4a6ca2a63000e41d6189
Related: OS#3155
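The shape of the fix can be sketched as follows (a Python sketch with
illustrative task numbers, not the firmware's actual C code or real
MF_TASK_* values):

```python
# MF_TASK_BCCH_NORM is 0x00, so a zero-initialized "secondary task"
# field silently enabled BCCH. Using -1 as the "not set" sentinel
# keeps task 0 usable as a legitimate value.
MF_TASK_BCCH_NORM = 0x00

def mf_task_mask(primary, secondary=-1, measurement=-1):
    """Compute a bitmask of multi-frame tasks to be enabled."""
    mask = 1 << primary
    if secondary != -1:  # before the fix, 0 (BCCH) slipped in here
        mask |= 1 << secondary
    if measurement != -1:
        mask |= 1 << measurement
    return mask
```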
|
|
Change-Id: I91780146d066c45c42b037c22cb49fd8a96e832b
|
|
Change-Id: Ide7b0527ad64a044977a10da4a82a8ecd1fbd8dc
|
|
Change-Id: I78163d41be3a912da1dd8c0543b1c3af3a0649fa
Related: OS#4681
|
|
DATAMSG.gen_msg() does validate the message before encoding.
Change-Id: Ia3691b3c18778cf7a1f16c71bef5c0b2e6241190
Related: OS#4681
|
|
This reverts commit 6e1c82d29836496b20e0d826976d9e71b32493d8.
Unfortunately, while solving one problem, it introduced even more regressions.
Change-Id: If29b4f6718cbc8af18fe18a5e3eca3912e8af01e
Related: OS#4658
|
|
Change-Id: I40628d32409543c9f4b40b7268a4538b4671102d
|
|
Change-Id: I16c63205c9133d964048588c25867ac7c310f951
|
|
TRX Toolkit is still backwards compatible with Python 2, but Python 3
does much better in terms of performance. Also, on Debian Stretch,
which is used as a base for our Docker images, Python 2.7 is still
the default. Let's require Python 3 in the shebang.
Change-Id: I8a1d7c59d3b5d49ec2ed94a7c77905e02134f216
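The change itself boils down to the interpreter line; a minimal check
of what the new shebang guarantees:

```python
#!/usr/bin/env python3
# With this shebang, the scripts run under Python 3 even on systems
# (such as Debian Stretch) where /usr/bin/python is still Python 2.7.
import sys

assert sys.version_info[0] >= 3, 'TRX Toolkit now requires Python 3'
print('running under Python %d.%d' % sys.version_info[:2])
```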
|
|
Change-Id: I5ddc531a4e98d4d6f8672d6ef14034fce605ba3d
|
|
In order to reflect the UL/DL delay caused by the premature burst
scheduling (a.k.a. 'fn-advance') in a virtual environment, the
Transceiver implementation now queues all to be transmitted bursts,
so they remain in the queue until the appropriate time of transmission.
The API user is supposed to call recv_data_msg() in order to obtain
an L12TRX message on the TRXD (data) interface, so it gets queued by
this function. Then, to ensure timely transmission, the user
of this implementation needs to call clck_tick() on each TDMA
frame. Both functions are thread-safe (queue mutex).
In a multi-trx configuration, the use of queue additionally ensures
proper burst aggregation on multiple TRXD connections, so all L12TRX
messages are guaranteed to be sent in the right order, i.e. with
monotonically increasing TDMA frame numbers.
Of course, this change increases the overall CPU usage, given that
each transceiver gets its own queue, and we need to serve them all
on every TDMA frame. According to my measurements, when running
test cases from ttcn3-bts-test, the average load is ~50% higher
than what it used to be. Still not significantly high, though.
Change-Id: Ie66ef9667dc8d156ad578ce324941a816c07c105
Related: OS#4658, OS#4546
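A minimal sketch of the described queueing scheme (hypothetical names
and structure; the real Transceiver implementation differs):

```python
import threading

class TxQueue:
    """Bursts are queued with their TDMA frame number and released
    by a per-frame clock tick, mimicking the recv_data_msg() /
    clck_tick() pair described above. Both entry points take the
    queue mutex, so they are safe to call from different threads."""
    def __init__(self):
        self._lock = threading.Lock()
        self._queue = []  # list of (fn, burst) tuples

    def recv_data_msg(self, fn, burst):
        # Queue a burst until its transmission time comes
        with self._lock:
            self._queue.append((fn, burst))

    def clck_tick(self, fn):
        # Release (in order) all bursts whose frame number is due
        with self._lock:
            due = sorted(b for b in self._queue if b[0] <= fn)
            self._queue = [b for b in self._queue if b[0] > fn]
        return [burst for _, burst in due]
```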
|
|
Change-Id: I85b2182d9835ed035cf370e45ea039ac6a7e8405
|
|
Change-Id: I157447c7610402f6d62d2b74c9f04fcaa0bc1724
|
|
Change-Id: I6d53e5266fa3b1f2eb55822d1c14975789b202ed
|
|
Change-Id: Ic1f44bfb21ac3173e9530a0a9966cd5e64b8bd48
|
|
Running with cProfile shows that there are quite a lot of calls:
469896 0.254 0.000 0.254 0.000 trx_list.py:37(__getitem__)
Let's better avoid using it in performance critical parts.
Change-Id: I2bbc0a2af8218af0b9a02d8e16d4216cf602892a
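The general pattern is hoisting the repeated __getitem__ lookup out of
the hot loop; an illustrative sketch, not the actual trx_list.py code:

```python
class TRXList:
    """Stand-in for a container with a custom __getitem__."""
    def __init__(self, items):
        self._items = items

    def __getitem__(self, idx):
        return self._items[idx]

def hot_loop(trx_list, n):
    # Bad: one __getitem__ call per iteration
    return sum(trx_list[0] for _ in range(n))

def hot_loop_hoisted(trx_list, n):
    # Better: a single lookup outside the loop
    trx = trx_list[0]
    return sum(trx for _ in range(n))
```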
|
|
Change-Id: I1c589888991add435d88517094c7b4a7db93cbae
|
|
Change-Id: Icfc403e500c24628da722ab378fba31923afd1a1
|
|
This change fixes several warnings reported by GCC 10.1.0:
apps/rssi/main.c:238:30: warning: 'sprintf' may write a terminating
nul past the end of the destination
apps/rssi/main.c:238:4: note: 'sprintf' output between 10 and 17
bytes into a destination of size 16
apps/rssi/main.c:413:26: warning: '.' directive writing 1 byte into
a region of size between 0 and 9
apps/rssi/main.c:413:3: note: 'sprintf' output between 10 and 20
bytes into a destination of size 16
Change-Id: I7980727b78f7622d792d82170f73c90ac5770397
|
|
Both symbols are declared in 'layer1/prim.h'.
Change-Id: I36f41870bd63c70259316204ee17071853257ca4
|
|
These symbols are defined, but never used:
- struct last_rach - seems to be copy-pasted from prim_rach.c,
- tall_msgb_ctx - already defined in libosmocore.
Change-Id: I6077c8e9b441f7848d1a4c25a8b5e1aed82f4b7d
|
|
By default, RSSI on the Rx side is computed based on the transmitter's
Tx power, subtracting the Rx path loss.
If FAKE_RSSI is used, then the values in there are used instead.
A default hardcoded value of Tx nominal power = 50 dBm is set to keep
the old behavior of RSSI = -60 dBm after calculations.
Change-Id: I3ee1a32ca22c3272e66b3ca78e4f67d283844c80
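The described computation can be sketched as follows; the path loss
constant of 110 dB is an assumption here, chosen so that the default
works out to -60 dBm as stated above:

```python
DEFAULT_TX_POWER = 50  # dBm, hardcoded nominal value from this patch
PATH_LOSS = 110        # dB, assumed: 50 dBm - 110 dB = -60 dBm

def comp_rssi(tx_power=DEFAULT_TX_POWER,
              path_loss=PATH_LOSS, fake_rssi=None):
    """FAKE_RSSI overrides the computed value; otherwise the RSSI
    is the transmitter's Tx power minus the Rx path loss."""
    if fake_rssi is not None:
        return fake_rssi
    return tx_power - path_loss
```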
|
|
Change-Id: I00126a90446e5f3fb77a46be9d7d5dbff89fa221
|
|
Jenkins build #2516 has uncovered a problem in DATADumpFile.parse_msg():
======================================================================
FAIL: test_parse_empty (test_data_dump.DATADump_Test)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/build/src/target/trx_toolkit/test_data_dump.py",
line 138, in test_parse_empty
self.assertEqual(msg, False)
AssertionError: None != False
I did a quick investigation, and figured out that this failure
happens when trying to call parse_msg() with idx == 0, because
DATADumpFile._seek2msg() basically does nothing in this case
and thus always returns True. The None itself comes from
DATADumpFile._parse_msg().
Let's ensure that DATADumpFile.parse_msg() always returns None,
even if DATADumpFile._seek2msg() fails. Also, update the unit
test, so we always test a wide range of 'idx' values.
Change-Id: Ifcfa9c5208636a0f9309f5ba8e47d282dc6a03f4
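A simplified stand-in for the fixed control flow (not the actual
DATADumpFile code):

```python
class DumpFileSketch:
    """parse_msg() now returns None whenever _seek2msg() fails,
    instead of blindly parsing at whatever the current offset is."""
    def __init__(self, msgs):
        self._msgs = msgs
        self._pos = None

    def _seek2msg(self, idx):
        # Report failure for out-of-range indexes
        if idx >= len(self._msgs):
            return False
        self._pos = idx
        return True

    def _parse_msg(self):
        return self._msgs[self._pos]

    def parse_msg(self, idx):
        if not self._seek2msg(idx):
            return None  # the fix: propagate the seek failure
        return self._parse_msg()
```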
|
|
There are two ways to implement frequency hopping:
a) The Transceiver is configured with the hopping parameters, in
particular HSN, MAIO, and the list of ARFCNs (channels), so the
actual Rx/Tx frequencies are changed by the Transceiver itself
depending on the current TDMA frame number.
b) The L1 maintains several Transceivers (two or more), so each
instance is assigned one dedicated RF carrier frequency, and
hence the number of available hopping frequencies is equal to
the number of Transceivers. In this case, it's the task of
the L1 to commutate bursts between Transceivers (frequencies).
Variant a) is commonly known as "synthesizer frequency hopping"
whereas b) is known as "baseband frequency hopping".
For the MS side, a) is preferred, because a phone usually has only
one Transceiver (per RAT). On the other hand, b) is more suitable
for the BTS side, because it's relatively easy to implement and
there is no technical limitation on the amount of Transceivers.
FakeTRX obviously does support b) since multi-TRX feature has been
implemented, as well as a) by resolving UL/DL frequencies using a
preconfigured (by the L1) set of the hopping parameters. The latter
can be enabled using the SETFH control command:
CMD SETFH <HSN> <MAIO> <RXF1> <TXF1> [... <RXFN> <TXFN>]
where <RXFN> and <TXFN> is a pair of Rx/Tx frequencies (in kHz)
corresponding to one ARFCN of the Mobile Allocation. Note that the
channel list is expected to be sorted in ascending order.
NOTE: in the current implementation, mode a) applies to the whole
Transceiver and all its timeslots, so using it for the BTS side
does not make any sense (imagine BCCH hopping together with DCCH).
Change-Id: I587e4f5da67c7b7f28e010ed46b24622c31a3fdd
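A hypothetical parser for the SETFH arguments shown above (the real
control command handler may well look different):

```python
def parse_setfh(args):
    """Parse 'HSN MAIO RXF1 TXF1 [... RXFN TXFN]', frequencies
    in kHz, into (hsn, maio, [(rx, tx), ...])."""
    vals = [int(a) for a in args.split()]
    if len(vals) < 4 or len(vals) % 2:
        raise ValueError('expected HSN, MAIO and Rx/Tx freq. pairs')
    hsn, maio, freqs = vals[0], vals[1], vals[2:]
    # Pair up alternating Rx/Tx entries of the Mobile Allocation
    ma = list(zip(freqs[0::2], freqs[1::2]))
    return hsn, maio, ma
```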
|
|
Based on firmware/layer1/rfch.c:rfch_hop_seq_gen() by Sylvain Munaut.
Change-Id: I9ecabfef6f5a4e4180956c6a019c386ccb1c9acd
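For reference, only the trivial cyclic case (HSN == 0) of the GSM 05.02
hopping sequence is sketched here; the pseudo-random case (HSN 1..63)
needs the RNTABLE from the spec and is omitted:

```python
def hop_index(fn, hsn, maio, n):
    """Mobile Allocation Index for the cyclic case:
    MAI = (FN + MAIO) mod N, where N is the number of
    frequencies in the Mobile Allocation."""
    assert hsn == 0, 'only cyclic hopping is sketched here'
    return (fn + maio) % n
```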
|
|
See the previous commit; TL;DR: this approach is significantly faster.
Change-Id: I5dc0dda89443d2763bfae50cc402724935cc91b3
|
|
This approach is much better than buf.append() in terms of performance.
Consider the following bit conversion benchmark code:
usbits = [random.randint(0, 254) for i in range(GSM_BURST_LEN)]
ubits = [int(b > 128) for b in usbits]
for i in range(100000):
sbits = DATAMSG.usbit2sbit(usbits)
assert(DATAMSG.sbit2usbit(sbits) == usbits)
sbits = DATAMSG.ubit2sbit(ubits)
assert(DATAMSG.sbit2ubit(sbits) == ubits)
=== Before this patch:
59603795 function calls (59603761 primitive calls) in 11.357 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
59200093 3.389 0.000 3.389 0.000 {method 'append' of 'list' objects}
100000 2.212 0.000 3.062 0.000 data_msg.py:191(usbit2sbit)
100000 1.920 0.000 2.762 0.000 data_msg.py:214(sbit2ubit)
100000 1.835 0.000 2.677 0.000 data_msg.py:204(sbit2usbit)
100000 1.760 0.000 2.613 0.000 data_msg.py:224(ubit2sbit)
=== After this patch:
803794 function calls (803760 primitive calls) in 3.547 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
100000 1.284 0.000 1.284 0.000 data_msg.py:203(<listcomp>)
100000 0.864 0.000 0.864 0.000 data_msg.py:193(<listcomp>)
100000 0.523 0.000 0.523 0.000 data_msg.py:198(<listcomp>)
100000 0.500 0.000 0.500 0.000 data_msg.py:208(<listcomp>)
1 0.237 0.237 3.547 3.547 data_msg.py:25(<module>)
100000 0.035 0.000 0.899 0.000 data_msg.py:191(usbit2sbit)
100000 0.035 0.000 0.558 0.000 data_msg.py:196(sbit2usbit)
100000 0.033 0.000 0.533 0.000 data_msg.py:206(ubit2sbit)
100000 0.033 0.000 1.317 0.000 data_msg.py:201(sbit2ubit)
So the new implementation is ~70% faster in this case, and makes
significantly fewer function calls according to cProfile [1].
[1] https://docs.python.org/3.8/library/profile.html
Change-Id: I01c07160064c8107e5db7d913ac6dec6fc419945
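The pattern in question, shown on an illustrative ubit (0/1) to sbit
conversion; the 0 -> +127 / 1 -> -127 mapping is an assumption, not
necessarily the exact data_msg.py semantics:

```python
def ubit2sbit_append(ubits):
    # Old style: one list.append() method call per bit
    sbits = []
    for b in ubits:
        sbits.append(-127 if b else 127)
    return sbits

def ubit2sbit_comp(ubits):
    # New style: a single list comprehension, no per-bit method calls
    return [-127 if b else 127 for b in ubits]
```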
|
|
Change-Id: I16d5190b3cdc997c5609b52d41203f10264b017c
|
|
Change-Id: Ie5d14a261e17af554f7132b03d58549a4831dcdb
|
|
Change-Id: Ied32764cf1c34dc7e0f746f4f085ea20168775cb
|
|
This change implements basic (receive only) support of the PDCH
channels that are used in GPRS. Several coding schemes are
defined by 3GPP TS 45.003, however we can only do CS-1
for now, since it's basically an equivalent of xCCH.
In order to support the other schemes (CS2-4), we would need to
know how to configure the DSP (look at the FreeCalypso code?).
Change-Id: I44531bbe8743c188cc5d4a6ca2a63000e41d6189
|
|
Change-Id: I16dd29d2f1e14e634029195599fa49a9be9219ab
|
|
Change-Id: If51052af04289f10bfaefd5374049908de05319a
|