A LoRa Mystery – Phantom Packets.

I had noticed while testing LoRa links at 868MHz with different receiver types that I was seeing a number of 'phantom' packets that I did not recognise. I initially thought these were just LoRa packets from the surrounding environment.

The receiver I was using was listening with a bandwidth of 500kHz and spreading factor 7, which are not the typical settings used for LoRa Internet of Things applications. I was using these settings as I needed a fast data rate and only a hundred metres or so of range. These phantom packets being quite short range, I thought they must be coming from somewhere nearby.

I used some LoRa SD card logging software I had written to check the details of these phantom packets; I could leave the receiver running for long periods to see how many packets were received. I did find a bug in the software which explained some odd RSSI and SNR values I had been seeing, but I was still curious as to where the packets were coming from.

LoRa Receiver


With the receiver running on the bench with an antenna I was getting around 5 to 10 phantom packets per hour; these were a mixture of packets with valid CRCs and packets where the CRC check had failed. Packets varied in length from a few bytes up to 200+. The reported RSSI and SNR were always around -110dBm and -10dB respectively, which suggested a single source for the packets.

I put an SMA terminator in place of the antenna and was surprised to see that I got around the same results, 5 to 10 packets per hour and similar RSSI and SNR values.

As a check to see if these phantom packets were actually coming from the great wide world I put my reference 868MHz antenna (1/4 wave vertical with radials) on top of the 8.5m mast attached to my workshop. I got the same results as I had when the receiver was sitting on the bench with an SMA terminator fitted. The assumption might be that the source of the phantom packets was therefore very local, interference from my PC or the lights perhaps? That however would not explain why I originally saw these packets in the middle of a large field.

For some of the LoRa link testing I had been doing recently I needed to cut the range of the LoRa transmitter (at spreading factor 12) to just 50m or so. Fitting an SMA terminator in place of the antenna was not enough. Even putting the receiver in a die-cast aluminium box was not enough. What did work very well was wrapping the transmitter in aluminium foil. That cut the reception range to 10m or so and it was easy to adjust the range out to 50m by using a pin to put small holes in the aluminium foil.

LoRa Test Transmitter


Screened Transmitter

If the aluminium foil was so effective at preventing the RF getting out, it ought to be effective at preventing RF getting in. So I wrapped a LoRa receiver in foil and also put it in an aluminium box. A LoRa transmitter sending packets at 17dBm (50mW) had to be within 5cm of the box for the receiver to pick up the packets; there was a buzzer on the receiver so I could tell when packets were received. So if real world packets were getting through all that shielding they had to be very powerful indeed.

Screened Receiver

With the receiver now wrapped in foil and consequently very well screened I was still receiving phantom packets, with similar RSSI and SNR values as before. Perhaps the phantom packets were a result of electromagnetic interference (EMI) coming from the receiver itself, but if so how could the CRCs be valid?

I modified my logger software to put the Arduino Pro Mini to sleep and have it wake when a packet was received, using an interrupt from the LoRa device's DIO0 pin. I even powered down the SD card. Thus the only active electronics running (and thus possibly generating EMI) was the LoRa device itself, listening for a packet.

With only the LoRa device itself active and with it heavily screened from the outside world, still the phantom packets appeared.

These phantom packets do seem to be coming from the receiver itself; when I ran two identical receivers next to each other on the bench, each would receive phantom packets but not at the same time. If the source of the packets was external then you might expect at least some duplicate receptions.

This does not appear to be just an issue with the 500kHz bandwidth setting; I have also seen phantom packets when the 125kHz bandwidth setting is used.

Maybe the LoRa receiver is fooled into starting packet reception by noise but then why does reception complete with a valid payload CRC and a valid header CRC as well?

Adventures with the ESP32

As part of a project to check whether locally generated electromagnetic interference (EMI) would affect LoRa receiver performance at its weak signal limits I needed an ESP32 board that I could plug Mikrobus modules into. Faster microcontrollers such as the ESP32 are becoming the norm, which is understandable: they are cheap, fast, have lots of memory and, in the case of the ESP32, have built-in Wi-Fi and Bluetooth. Faster microcontrollers, however, generate more EMI, so I wanted to do a LoRa reception comparison across various microcontrollers. For the comparison to be valid I needed to use the same LoRa module with the different controller boards; the easiest way to achieve that was to have the LoRa device on a plug-in Mikrobus board. There are some details of Mikrobus modules here;



The RFM98 module above is assembled using my own low cost PCBs; in this case the Mikrobus module PCB is modified from standard to allow access to all the LoRa device DIO pins. The advantage of having a controller board that accepts Mikrobus modules is that it is easy to re-purpose the controller board simply by fitting a different type of Mikrobus module, see the MikroElektronika link above for the possibilities.

But which ESP32 board to use? There are several; the Wemos Lolin32 seems popular, but I really wanted to be able to plug in a choice of batteries and not be restricted to a lithium battery. The NodeMcu32s fits the bill, it's one of the smaller ESP32 boards and provides access to the battery voltage input pin. The board is shown below.


Designing a Mikrobus module PCB for the NodeMcu32s was easy enough (I use Eagle), but what had me stuck was which of the pins of the NodeMcu32s you could use as inputs, outputs, interrupts or analogue, and more importantly which pins you needed to avoid for fear of interfering with the ESP32 itself or the attached flash.

I decided to design my board layout first and then check that I could get the functions I wanted out of the pins. The functions I needed were;

Outputs, flash an LED, drive a chip select.

Inputs, read a switch or interrupt.

Analogue, read a battery voltage for instance.

Pulse Width Modulation, not needed for my board, but useful for the Mikrobus sockets.

UART transmit and receive, two channels.

I2C, read and write to a FRAM and SSD1306 OLED display.

Micro SD card read and write.

LoRa device, read and write.

So as well as checking the basic pin functions, I needed test programs for each of the above to confirm it would all work in practice. After a couple of days of testing and checking I worked out which pins you can use on the NodeMcu32s and which ones to avoid, they are listed below, with comments against each pin.


0 Input, shared with the IO0 switch on the board, this switch needs to be pressed, pin held low, to download a program

1 TX output from serial console, I made no connection to this pin

2 Output, shared with the LED on the board

3 RX input to serial console, I made no connection to this pin

4 Analogue, used to read battery level

5 Output, SPI SS, used for chip select on LoRa device

6 Could not use, appears to be used by flash

7 Could not use, appears to be used by flash

8 Could not use, appears to be used by flash

9 Could not use, appears to be used by flash

10 Could not use, appears to be used by flash

11 Could not use, appears to be used by flash

12 Input, can be used as a switch input, but be sure to leave it floating during programming. There is a pull up to VCC on the board and when this pin is high the correct voltage for the Flash is selected. If the pin is low programming will fail

13 Input, Serial1 RX, you can select any of the available input pins for the RX when defining the Serial1 instance

14 Output, Serial1 TX, you can select any of the available output pins for the TX when defining the Serial1 instance

15 Input and Output, LoRa device DIO2 interrupt input or PWM output

16 Input, Serial2 RX, you can select any of the available input pins for the RX when defining the Serial2 instance

17 Output, Serial2 TX, you can select any of the available output pins for the TX when defining the Serial2 instance

18 Output, SPI SCK, used for the LoRa device and SD card

19 Input, SPI MISO, used for the LoRa device and SD card

21 SDA, used for the SSD1306 display and I2C FRAM

22 SCL, used for the SSD1306 display and I2C FRAM

23 Output, SPI MOSI, used for the LoRa device and SD card

25 Output, used for chip select on SD card

26 Analogue, used to read battery level

27 Output, used for the RST on the LoRa device

32 Output, PWM

33 Input only, used for reading external switch

34 Input only, needs external pull-up, used for reading DIO1 on LoRa device

35 Input only, needs external pull-up, used for reading DIO0 on LoRa device

36 Input only, needs external pull-up, used for reading DIO3 on LoRa device

39 Input only, needs external pull-up, used for reading external switch

EN Input, External reset, pull to ground for reset.

VIN External power input. Intended supply is 4 x AA Alkaline or NiMh batteries.

It is not clear why pins 6 to 11 are provided on the board, you do not appear to be able to use them. These pins are not present on some other ESP32 boards such as the Wemos Lolin32.

Although the pin mappings above are the defaults for the NodeMcu32 board you can map the pins for SPI, I2C, UART or PWM to any available pins, although not 6,7,8,9,10,11,12. I have also not tried redirecting these interfaces to 34,35,36,37.

There are some capacitors associated with pins 36 (VP) and 39 (VN) but they did not appear to interfere with the relatively slow interrupts you will get from a LoRa device for instance.

Program space

I have a program that I use for LoRa link testing, it receives LoRa packets and logs the SNR, RSSI and contents to the Serial monitor and SD card. When compiled for an Arduino Pro Mini the program consumes this much space;

Sketch uses 19562 bytes (63%) of program storage space. Maximum is 30720 bytes.
Global variables use 1292 bytes (63%) of dynamic memory, leaving 756 bytes for local variables. Maximum is 2048 bytes

For the NodeMcu32s it consumes this amount of space;

Sketch uses 159701 bytes (12%) of program storage space. Maximum is 1310720 bytes.
Global variables use 12348 bytes (4%) of dynamic memory, leaving 282564 bytes for local variables. Maximum is 294912 bytes.

So although the NodeMcu32s reports 1,310,720 bytes of available flash, compared to 30,720 for the Pro Mini, compiled ESP32 programs consume that space at around 8 times the rate of an ATmega328 program, so you don't have as much usable program space as you might think.

Lost in a (wet or dry) Forest

So just how far will LoRa transmissions go in a forest that is wet?

I decided to find out.

Local to me is a park with an area of forest and undergrowth, around 300m x 200m, in the middle of it. I wanted a location where I could create a repeatable reference test, so at one end of the forest (the left mark on the map below) I placed a stick on the ground in the undergrowth and held it in place with pegs. I would use this to line up the tip of the antenna on the test transmitter, so that in subsequent tests the antenna would be in the same location.

Wet or dry forest pic1

On the map the test route distance between the two marks is 200m. The forest is fairly overgrown with a lot of low level vegetation. Below are some pictures of the ‘forest’ so you can appreciate the depth of the undergrowth. The first picture below is from the transmitter location on the left of the map looking towards the receiver location on the right.

Wet or dry forest pic2

The second picture, below, is from the receiver looking towards the transmitter.

Wet or dry forest pic3

This third picture is the test transmitter, an Arduino Pro Mini and RFM98 LoRa device placed in a metal box. A ¼ wave wire antenna is used for both transmitter and receiver. The transmitter is shown below with a 10dB attenuator in place, both to reduce the signal from the antenna to legal levels (10dBm in the UK) and to provide a practical range for the receiver. The test transmitter is on the ground amongst the vegetation (worst case location perhaps). The receiver was held in my hand with the tip of the ¼ wave wire antenna at around 1.5m from the ground.

Wet or dry forest pic4

The test method involves sending a series of packets at descending power, 17dBm down to 2dBm. The power level used to send each packet is part of the data in the packet, so the receiver can tell the power used to send the packets it is receiving. If the receiver stops receiving packets sent below 10dBm, you have quantified how much power is needed to cover the distance.
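The mechanics of the test can be sketched in a few lines of C++; the function names are my own, not taken from the actual test software.

```cpp
#include <cstdint>
#include <vector>

// One test cycle: packet powers descending from 17dBm to 2dBm. The power
// used is embedded in the packet so the receiver can report the lowest
// power it still hears.
std::vector<int8_t> powerSchedule(int8_t highdBm = 17, int8_t lowdBm = 2) {
    std::vector<int8_t> powers;
    for (int8_t p = highdBm; p >= lowdBm; --p) powers.push_back(p);
    return powers;
}

// Effective radiated power when an attenuator sits between module and antenna.
int effectivePowerdBm(int txPowerdBm, int attenuatordB) {
    return txPowerdBm - attenuatordB;
}
```

So if the lowest packets received were sent at 6dBm through a 10dB attenuator, the link was in effect covered with -4dBm of radiated power.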

The LoRa settings were 434.4MHz, bandwidth 62.5kHz, spreading factor 8, code rate 4:5, explicit mode. This represents a data rate of 1562bps.

At the time of the test it had been raining overnight and for most of the morning, and it was raining heavily during the test; the forest was very wet. The test transmitter had a 10dB attenuator in place.

Packets sent at 6dBm power were the lowest received, so taking account of the 10dB attenuator, the packets were in effect being transmitted at -4dBm or 0.4mW.

The first test was with the forest fairly wet. A couple of weeks later, after it had been dry for several days, I repeated the test with the same equipment and with the antennas in the same position and location. This time packets were just received at -9dBm, so the dry forest was 5dB better than the wet. Or more simply, when the forest was wet signals were reduced by around 5dB.

So big question, if the transmissions had been at 10mW (normal ISM band limit) then how far might the signals propagate in worst case (wet) conditions?

If packets are sent at 10dBm but -4dBm is enough to cover the 200m then there is a link margin at 200m of 14dB. Thus, if the forest is assumed to be uniform, an extra 14dB of link gain should increase the distance by 5 times, to 1000m.

If the full 17dBm (50mW) of the transmitter was used the distance covered could increase to 2200m.

There is more. For this test the signals were being sent at a LoRa data rate of 1562bps; if the data rate were lowered to 150bps, which is fine for a lot of data gathering and control applications, range would increase by a further 3 times using the same power.
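The range estimates above follow from simple dB arithmetic; a minimal sketch (the helper name is my own), assuming as the article does that signal strength falls with the square of distance, i.e. 6dB per doubling of range:

```cpp
#include <cmath>

// Link margin in dB -> distance multiplier, assuming signal strength
// falls with the square of distance (6dB per doubling of range).
double rangeMultiplier(double margindB) {
    return std::pow(10.0, margindB / 20.0);
}
```

A 14dB margin gives a multiplier of just over 5 (1000m from 200m), and the 21dB margin available at the full 17dBm gives just over 11 times, hence the 2200m estimate.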

The first LoRa received from Space !

During The Things Network (TTN) conference in Amsterdam recently Thomas Telkamp arranged to borrow some time on the maritime satellite NORSAT2 and used its SDR to transmit LoRa as the satellite passed over Amsterdam.

The LoRa was transmitted on 162MHz at 26dBm (400mW) into an approximately 6dB gain steerable Yagi which was pointed at the horizon, and at Amsterdam, during the pass; the data rate was 292bps.

The TTN receiver was a standard Microchip LoRa 169MHz device with a simple antenna, placed on the roof of a two storey building in the industrial area just to the north west of Amsterdam centre. The LoRa was received as soon as the satellite cleared the horizon, at a range of 2763km.

There was a challenge (prize) to be the first to receive the LoRa during the conference. One of my own very simple handheld Arduino Pro Mini receivers, fitted with a 433MHz LoRa module (i.e. the wrong one for the frequency) and a telescopic whip antenna, was good enough to receive the first LoRa on 162MHz. Thomas had estimated the insertion loss of a 162MHz signal into a 433MHz receiver at around 20dB! The receiver was just left propped against the window of my hotel bedroom. I received packets close to the limit of reception with an SNR of -18dB.


LoRa from Space


The Video of the ‘LoRa from Space’ presentation can be viewed here;



The problem with co-ax cable connections to antennas

I have been experimenting with and testing some simple antennas for 868MHz, primarily for Internet of Things (IoT) applications.

A dipole used vertically has around the same performance as a 1/4 wave vertical with 4 x 1/4 wave radials, but the dipole is easier to build. In the test setup shown below I am using a LoRa module as a transmitter and measuring the RF field strength it produces at some distance.

Dipole and Co-ax 1

The dipole is built on a BNC chassis socket and the radio module is mounted directly onto the antenna using adapters; I did not quite have the right one so there is a BNC plug to plug in there. I will re-test when I have the right adapter, but the objective of the test was really to see what we can do to reduce losses when a co-ax cable is used to connect the antenna.

Mounting the module direct on the antenna like this is often not going to be convenient especially if the antenna is going outside.

Dipole and Co-ax 2

So what happens when a length of co-ax is used to connect the antenna, as shown in the second picture?

The outer of the co-ax cable (which is 0.5m long) now becomes part of the antenna and acts to detune it. Compared to the setup in the first picture the co-ax causes the antenna output to drop by circa 5dB, a significant loss. Whether this can be mitigated by retuning the antenna will be the subject of another article.

Dipole and Co-ax 3

So we are using the co-ax but the antenna performance drops significantly; what do we do about it? One simple option is to use a clamp-on ferrite choke of the type you see on monitor cables or power supplies.

The ferrite choke acts to stop RF travelling down the co-ax, so it does not detune the antenna as much. The test result was that the transmitter was only putting out 1dB less than in the first picture, a significant improvement over the 5dB loss without the choke in place.


I will be checking some other clamp-on chokes to see if the same results occur with low cost chokes and longer cables too.

How to use low bandwidths with the LoRaTracker programs.

These comments relate specifically to my own tracker programs which can be found here;


Using a low LoRa bandwidth can improve the range/distance you can achieve; you can do much the same by increasing the spreading factor. Range improvements have a cost: the packets take longer to transmit. At bandwidths above 25kHz local regulations may restrict you to a 10% duty cycle, so a long range low data rate packet that takes maybe 5 seconds to send can only be sent every 50 seconds or so.
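The duty cycle arithmetic is simple enough to write down; a small sketch (the helper name is my own):

```cpp
// Minimum interval between transmissions that respects a duty cycle
// limit: packet airtime divided by the duty cycle fraction.
double minIntervalSeconds(double airtimeSeconds, double dutyCyclePercent) {
    return airtimeSeconds * (100.0 / dutyCyclePercent);
}
```

So a 5 second packet under a 10% limit needs a 50 second gap, but at 100% duty cycle it can be sent back to back.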

With a low LoRa bandwidth packet you can likely use greater duty cycles so you may be able to send the packet continuously, although do check your local regulations.

However, at the lowest possible LoRa bandwidth, 7.8kHz, the LoRa transmitter and receiver are very likely to be too far apart in frequency for packet reception to work initially, and some combinations of modules may not work together even at 41.7kHz. Once packet reception is actually taking place at a low bandwidth we can use automatic frequency control (AFC) functionality to keep the receiver within a few hundred hertz of the transmitter, even if the transmitter frequency changes due to temperature shifts etc.

The LoRaTracker receiver programs can deal with this issue, more or less automatically, there is no need for manual adjustments of receiver or remote adjustments of the transmitter.

The HAB2 LoRaTracker program has bind functionality: the transmitter sends a bind packet at start-up. The receiver can be put into bind mode and will pick up and use the LoRa settings of the transmitter. A consequence of this bind is that the receiver will also adjust to the frequency of the transmitter. However, the bind functionality is kept short range on purpose, so if your receiver is some distance away, or is started up after the transmitter powers up, you won't be able to use the bind to frequency lock your transmitter and receiver.

By default the HAB2 tracker program puts out a binary location-only packet (the search mode packet) at bandwidth 62.5kHz, spreading factor 12 (SF12). This packet is only 12 bytes long so can be sent fairly often even at this 'high' bandwidth. Let's imagine the tracker mode packet (HAB payload) has been set to use a bandwidth of 7.8kHz at SF8. If the transmitter and receiver are approximately 2kHz apart in frequency, packet reception will not work.

The 62.5kHz bandwidth search mode packet has a much greater capture range of 15kHz, so it's unlikely the transmitter and receiver will be far enough apart to stop packet reception working.

All we need to do is temporarily switch the receiver over to search mode, wait for it to receive a packet, then swap back to tracker mode. The reception of the search mode will change the receiver frequency to match the transmitter and the two should then stay locked together.

If you're not using the search mode packet you could switch the receiver across to command mode listen and achieve the same automatic set-up.

The changes to the LoRa tracker programs to allow this to work are in this program;


To be found here;


You will also need the updated LoRaTracker library from here;


It would be straightforward to automate the use of the low bandwidths: just send out a regular very short higher bandwidth SF12 packet to act as an AFC calibrator. The receiver initially listens for this packet, does the AFC when it's received and switches across to the low bandwidth mode.

The impact of bandwidth on LoRa sensitivity.

A LoRa device provides several options if you want to improve the reception range of the packets you are sending. All of these ‘improvements’ apart from increasing transmitter power result in slower packets that take longer to send.

The options are;

Increase transmitter power

Increase the spreading factor (SF)

Reduce the bandwidth

Increase the coding rate

What I want to measure is the real world effect of reducing the bandwidth.

For a LoRa receiver to detect (capture) a packet, the transmitter and receiver centre frequencies need to be within 25% of the bandwidth of each other, according to the data sheet. Myself, I would err on the cautious side and plan for 20%, which I have found is closer to real world conditions.

So what does the 25% capture range mean?

Assume the transmitter has a centre frequency of 434MHz and we are using a LoRa bandwidth of 500kHz. 25% of this is 125kHz, so as long as the receiver centre frequency is within +/- 125kHz of 434MHz the receiver should pick up the packets. Thus the receiver centre frequency needs to be within the range 433.875MHz to 434.125MHz.
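That calculation is easy to turn into a helper; a sketch of the 25% of bandwidth rule (names are my own):

```cpp
// Band of receiver centre frequencies that should still capture a
// packet, using the data sheet's 25% of bandwidth rule.
struct FreqRange { double lowHz; double highHz; };

FreqRange captureRange(double txCentreHz, double bandwidthHz, double fraction = 0.25) {
    double offsetHz = bandwidthHz * fraction;
    return { txCentreHz - offsetHz, txCentreHz + offsetHz };
}
```

For the cautious 20% planning figure mentioned earlier, pass 0.20 as the fraction instead.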

A lot of LoRa modules do not use temperature compensated crystal oscillators for their reference frequency but use low cost plain crystals instead; this helps to keep the cost of LoRa modules low. In practice I have seen 434MHz LoRa modules at most +/- 5kHz from 434MHz, which is not bad for low cost crystals.

However one module might be up to 5kHz high, and another might be 5kHz low, so the difference between two modules could be as much as 10kHz. Using the 25% of bandwidth rule you could have a situation where, out of the box, two LoRa modules might not communicate with each other at 41.7kHz bandwidth, and this has happened with some of the modules I have used. It's for this reason, and to significantly improve interoperability of devices, that The Things Network (TTN) and LoRaWAN typically use a bandwidth of 125kHz; it's then unlikely that two devices will be far enough apart in frequency for communications to fail.
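The worst case is worth checking numerically; a sketch (my own helper, using the cautious 20% figure suggested earlier rather than the data sheet's 25%):

```cpp
// Worst case: one module high by maxErrorHz and the other low by the
// same amount. Do they still fall within the capture range?
bool modulesShouldCommunicate(double maxErrorHz, double bandwidthHz, double fraction = 0.20) {
    return 2.0 * maxErrorHz <= bandwidthHz * fraction;
}
```

With +/- 5kHz crystals, 125kHz bandwidth has 25kHz of capture range and is comfortably safe, but at 41.7kHz the capture range is only around 8.3kHz against a possible 10kHz offset, so two modules may fail to communicate out of the box.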

Calibrating LoRa modules with a compensation factor, so that variations in centre frequency are eliminated for a particular temperature, can allow lower bandwidths to be used. You do though need a frequency counter or similar to measure the centre frequency of each individual module.

Fortunately the LoRa device will, on reception of a packet, calculate an estimate of the frequency error between transmitter and receiver, so it is possible for the receiver to track any transmitter frequency changes due to temperature shifts even at low bandwidths. Do note though that this AFC functionality will only work once packet reception has started.
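For the SX127x family that frequency error estimate comes from the FEI registers; the scaling below follows my reading of the SX127x data sheet, so treat it as an assumption and verify it against your own device's data sheet before relying on it:

```cpp
#include <cmath>
#include <cstdint>

// Frequency error from the SX127x FEI registers: a 20-bit signed value
// scaled by 2^24 / Fxtal and by bandwidth / 500kHz (Fxtal is typically
// 32MHz). Check this against your device's data sheet.
double frequencyErrorHz(int32_t feiRaw, double bandwidthkHz, double xtalHz = 32000000.0) {
    return static_cast<double>(feiRaw) * std::pow(2.0, 24.0) / xtalHz * (bandwidthkHz / 500.0);
}
```

The receiver applies the negated error to its own centre frequency after each packet, which is how it can stay locked to a drifting transmitter.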

Enough preamble. What I really wanted to measure was how much sensitivity improved as the bandwidth was reduced, with the spreading factor kept the same. The table below shows the quoted sensitivity in dBm for a LoRa device, and the data rate, for the range of spreading factors and bandwidths.

Bandwidth vs Spreading Factor

In the test I carried out, I was using a spreading factor of 8 and a bandwidth of 7.8kHz versus 62.5kHz, the table shows that at 62.5kHz the sensitivity should be -128dBm improving to -139dBm at a bandwidth of 7.8kHz. The test I would use would not measure sensitivity itself, but rather how much transmit power was required to cover a link for each of the bandwidths. The quoted data sheet sensitivity is not so important, but how much power you actually need to cover a particular distance is.

This descending power method of testing I use is described in more detail here;


I set up my 'typical' 1km across-town link so that at a bandwidth of 62.5kHz SF8, packet reception stopped when the transmitter power was around 14dBm. The transmitter antenna needed a 20dB attenuator fitted to achieve this, so in this case the 1km was covered with -6dBm of actual transmitted signal.

The test software sent 62.5kHz bandwidth test packets at 17dBm down to 2dBm in 1dBm steps, then flipped across to a bandwidth of 7.8kHz and did the same. Automatic frequency control operated at the receiver and kept it within 250Hz or so of the transmitter, giving reliable reception at the lower bandwidth.

The results are shown in the graph below, it shows how many packets of a particular power were received;

SF8 Bandwidth Comparison

The data sheet predicted a 10dB improvement as the bandwidth changed from 62.5kHz down to 7.8kHz and that is close to what the graph shows. Note the data rate dropped from 1563bps to 195bps. The 10dB improvement would represent a range improvement of around 3 times.

So what’s the point of low bandwidth?

An interesting question.

With bandwidths lower than 62.5kHz you cannot assume that a transmitter and receiver will work together, due to variations in centre frequency. Calibration can help, but if one end of the link is very cold the frequency could shift enough to stop initial reception, so why bother?

There are two basic ways to improve LoRa link performance or range. First is to use a higher bandwidth and spreading factor, the other is to use a lower bandwidth and spreading factor.

Starting from a 62.5kHz bandwidth SF8 set-up you get around the same increase in performance, and much the same data rate, by changing to SF12 as you would by staying at SF8 and reducing the bandwidth to 7.8kHz.

In a lot of places around the world, if you keep below a 25kHz channel width you can use up to 100% duty cycle, i.e. transmit continuously. Above 25kHz channels, or LoRa bandwidths above 20.8kHz, duty cycle may be limited by regulation to 10%. SF12 at a higher bandwidth might give you the same range as a lower bandwidth and spreading factor combination, but you may be significantly restricted by duty cycle as to how much data you can transmit in a given period.

So if you want to send a lot of data, needing more than a 10% duty cycle, it makes sense to use a lower bandwidth and spreading factor combination, if you can overcome the frequency capture issues mentioned above.

LoRa Signal Quality – RSSI or SNR?

It's helpful to know how good the LoRa signals you are receiving are, or perhaps more importantly how close they are to failure.

The LoRa device measures two things during packet reception, the received signal strength indicator (RSSI) measured in dBm, and the signal to noise ratio (SNR) measured in dB.


From practical experiments I have observed that as the reported SNR approaches the limit specified for the spreading factor in use, packet reception will start to fail.

For instance at SF8 the SNR limit is -10dB. If you're transmitting packets that are being received at an SNR of 0dB and you then reduce transmit power by 8 or 9dB, the reported SNR will drop to around -9dB and packet reception starts to fail. I have found the reported SNR a very good indication of approaching reception failure.
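The SNR limits quoted in the SX127x data sheet follow a simple pattern, 2.5dB more tolerance for each spreading factor step; a sketch assuming that data sheet table (check it against your own device's figures):

```cpp
// Demodulator SNR limit in dB as quoted in the SX127x data sheet:
// SF7 = -7.5dB, SF8 = -10dB, ... SF12 = -20dB, i.e. each spreading
// factor step buys 2.5dB. Valid for spreading factors 7 to 12.
double snrLimitdB(int spreadingFactor) {
    return -2.5 * spreadingFactor + 10.0;
}
```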


Under very good reception conditions, with strong signals, the reported SNR rarely goes above 8dB, even as signals get stronger. So for very strong signals SNR is not a good indicator of signal quality.

I decided to plot the results of a large variety of reception conditions at SF7, bandwidth 125kHz. The signals varied from very strong, with the transmitter and receiver close together, to very weak signals where packet reception was failing. I plotted the results, see the graph below.

SNR versus RSSI Graph

You can see the spread of reported RSSI for SNRs of 10, 9 and 8dB; the range is -45dBm to -105dBm. So clearly SNR is not a lot of help in giving a quality indication for very strong signals.

But then look what happens as the SNR drops: at the failure point, -8dB SNR and below, the RSSI varies from -100dBm to -120dBm.

For an overall quality indicator perhaps a compromise is needed: if the SNR is say 7dB or higher, use the RSSI figure, and if the SNR is lower than 7dB, use the SNR.
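That compromise could be coded up along these lines; the function and the thresholds are my own choice, not from any LoRa library:

```cpp
#include <string>

// Rough quality judgement: judge strong signals on RSSI (SNR has hit
// its ceiling) and weak ones on their distance from the SNR limit for
// the spreading factor in use.
std::string signalQuality(double rssidBm, double snrdB, double snrLimitdB) {
    if (snrdB >= 7.0)                       // SNR near its ceiling, use RSSI
        return (rssidBm > -90.0) ? "strong" : "good";
    return (snrdB - snrLimitdB > 3.0) ? "fair" : "marginal";
}
```

So a packet at -110dBm and -9dB SNR with an SF8 limit of -10dB would be judged marginal, while -50dBm at 9dB SNR is clearly strong.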

Just a thought.

Ping testing the Heltec Wi-Fi LoRa OLED device

This Heltec device looks very promising, it's an ESP32 module with a built-in LoRa device, Wi-Fi, Bluetooth and an OLED display to boot!

Heltec Module

I have tested some other microcontroller modules where the faster processor produces enough electromagnetic interference (EMI) to reduce LoRa sensitivity. This problem does not affect installations where the LoRa antenna is remote from the microcontroller, such as with a roof mounted antenna, but there is an effect if the LoRa antenna is local, as it will be for a portable device or a small sensor node.

With some simple equipment it is not difficult to test for any adverse effect, and we can measure this effect in dB.

To carry out a ping test you need a LoRa transmitter that repeatedly sends packets at the frequency, bandwidth, spreading factor and coding rate that you specify. The test is best performed in a large open field, to reduce the impact of external interference, from PCs, lights, Wi-Fi and reflections from nearby objects.

The ping test requires that you reduce the reception range of the packets to around 100m or less. This can be achieved by shielding the transmitter in a metal box and using attenuators in series with the antenna to significantly reduce the transmitted signal level. Even an SMA terminator may radiate enough signal to work. For the Heltec device comparison the transmitter, an Arduino Pro Mini and Hope RFM98 LoRa device, was placed in a tin on a pole in the middle of the field.

Below, left to right, transmitter, ATMega receiver and Heltec receiver.

Transmitter, ATMega Receiver, Heltec Receiver

The receiver used to set up the test is also an Arduino Pro Mini, with a Dorji DRF1278F LoRa device. It is set up with the same LoRa parameters and has an LED and buzzer which activate whenever a packet is received. The transmitter sends packets every second or so.

With the receiver in your hand you walk away from the transmitter until the LED\buzzer stops. If you get to 100m and you're still receiving packets, you need to attenuate the transmitter some more and try again.

Let's imagine we have now set it up so that packets stop being received at 100m when the Arduino Pro Mini receiver is used.

We then repeat the test, either with a different type of receiver, or perhaps a different transmitter or receiver antenna. This time the LED\buzzer stops at 50m. So the reception distance in the first test is twice that of the second, and we know that twice the distance requires 4 times, or 6dB, more power.

Thus by simple distance measurement we can conclude that the antenna (or receiver) used in the first test is 6dB better than in the second. This technique for making dB measurements works if only one thing is changed at a time, the receiver antenna is held at the same height and orientation for both tests, and the area for testing is large enough that reflections from nearby objects are minimal.
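The distance-to-dB conversion used above can be sketched in one line (the helper name is my own), assuming signal strength falls with the square of distance:

```cpp
#include <cmath>

// dB difference implied by two reception distances, assuming signal
// strength falls with the square of distance (6dB per doubling).
double dBFromDistances(double farMetres, double nearMetres) {
    return 20.0 * std::log10(farMetres / nearMetres);
}
```

100m against 50m gives the 6dB quoted above; less convenient ratios drop out just as easily.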

Over to the local park I went and set up the test as described above; the transmitter with an SMA terminator fitted and the lid missing from the box gave me 108m. LoRa settings were 434.4MHz, bandwidth 125kHz, spreading factor 8, coding rate 4:5.

I then changed to the Heltec Wi-Fi LoRa receiver and repeated the test; I received packets up to 70m. Thus the Heltec board had circa 3.5dB worse receiver performance.

Now although the test showed the difference between two receivers, it's possible that the LoRa device on the Heltec board was below par. You could of course test a number of modules to get an average result, but that could get expensive. There is an alternative however: use the same LoRa device, first with the ATMega receiver and then with the Heltec ESP32 board. When I have suitable breadboards to build these receivers, I will repeat the test.

Reducing project sleep mode current to 0.001uA

The nanoPower

(A Very low current sleep mode power controller)


This small PCB is designed to put battery powered microcontroller projects into an extreme low current sleep mode and turn them back on again up to one month later.

With care, an original project design may get its sleep mode current consumption down to 10uA or perhaps a bit less. If a project has been designed without a low power sleep mode in mind then a complete redesign is likely not an option.

The nanoPower can be used with most battery powered microcontroller projects, old or new, without changing the design. The nanoPower will reduce battery drain in sleep mode to almost unmeasurable figures, less than 0.001uA.

The nanoPower is suitable for projects that are battery powered at up to 6V and connects between the battery itself and the project's battery input. There is reverse battery and over current protection. A 4 pin cable is required to connect the nanoPower to the project's I2C interface, 5V or 3.3V, for control. Simple software on the project configures the real time clock (RTC) on the nanoPower. Any sleep time between 1 second and one month can be selected.

On a once per hour wakeup the RTC's lithium backup battery should keep the nanoPower going for a year. There is a power switch on the board so that you can be sure to disconnect the project from the battery if need be.

To use the nanoPower connect it between battery and project as shown in the picture below and turn the power slide switch on (up).


To first use the nanoPower with a project you need to press and hold down the push button, on the left in the picture. This forces the power on. Keep the button pressed until your initial program has been loaded and run.

Any project using the nanoPower needs, as the first instructions in its program after reset or power up, to write to the RTC and configure it to keep the power on.

Sample code has been tested for Arduino and Micropython.