Lost in a (wet or dry) Forest

So just how far will LoRa transmissions go in a forest that is wet?

I decided to find out.

Local to me is a park that has in the middle of it an area of forest and undergrowth around 300m x 200m. I wanted a location where I could create a repeatable reference test, so at one end of the forest, the left mark on the map below, I placed a stick on the ground in the undergrowth and held it in place with pegs. I would use this to line up the tip of the antenna on the test transmitter, so that in subsequent tests the antenna would be in the same location.

Wet or dry forest pic1

On the map the test route distance between the two marks is 200m. The forest is fairly overgrown with a lot of low level vegetation. Below are some pictures of the ‘forest’ so you can appreciate the depth of the undergrowth. The first picture below is from the transmitter location on the left of the map looking towards the receiver location on the right.

Wet or dry forest pic2

The second picture, below, is from the receiver looking towards the transmitter.

Wet or dry forest pic3

This third picture is the test transmitter, an Arduino Pro Mini and RFM98 LoRa device placed in a metal box. A ¼ wave wire antenna is used on both transmitter and receiver. The transmitter is shown below with a 10dB attenuator in place, both to keep the signal from the antenna within legal levels (10dBm in the UK) and to bring the range down to something practical for the test. The test transmitter is on the ground amongst the vegetation (worst case location perhaps). The receiver was held in my hand with the tip of the ¼ wave wire antenna around 1.5m from the ground.

Wet or dry forest pic4

The test method involves sending a series of packets at descending power, 17dBm down to 2dBm. The power level used to send each packet is part of the data in the packet, so the receiver can tell the power used to send the packets it is receiving. If, say, the receiver stops receiving packets below 10dBm, you have quantified how much power is needed to cover the distance.
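
As a minimal sketch of the idea (not the actual test program; setTxPower() and sendPacket() here are stand-ins for whatever LoRa library is in use), the transmitter side looks like this;

  // Descending power test transmitter - a minimal sketch of the method.
  // The two radio calls are stubs standing in for a real LoRa library.
  #include <cstdio>

  void setTxPower(int dBm)       { printf("TX power %ddBm\n", dBm); }   // stub
  void sendPacket(const char *p) { printf("Sending %s\n", p); }         // stub

  int main()
  {
    for (int power = 17; power >= 2; power--)              // 17dBm down to 2dBm
    {
      setTxPower(power);
      char packet[16];
      snprintf(packet, sizeof(packet), "TEST,%d", power);  // power travels in the packet
      sendPacket(packet);
    }
    return 0;
  }

The receiver just parses the power field out of each packet it hears; the lowest value seen is the power needed to cover the path.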

The LoRa settings were 434.4MHz, bandwidth 62.5kHz, spreading factor 8, coding rate 4:5, explicit mode. This represents a data rate of 1562bps.
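
For reference, the raw LoRa data rate follows from the Semtech data sheet formula Rb = SF x (BW / 2^SF) x CR, which is easy to check;

  // Raw LoRa data rate: Rb = SF * (BW / 2^SF) * CR (Semtech data sheet).
  #include <cstdio>
  #include <cmath>

  int main()
  {
    double sf = 8, bw = 62500, cr = 4.0 / 5.0;   // the settings above
    double rb = sf * (bw / pow(2.0, sf)) * cr;
    printf("Data rate %.0fbps\n", rb);           // prints 1562bps
    return 0;
  }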

At the time of the test it had been raining overnight and for most of the morning, and it was raining heavily during the test itself; the forest was very wet. The test transmitter had a 10dB attenuator in place.

Packets sent at 6dBm were the lowest power received, so taking account of the 10dB attenuator the packets were in effect being transmitted at -4dBm, or 0.4mW.

The first test was with the forest fairly wet. A couple of weeks later, after it had been dry for several days, I repeated the test with the same equipment and with the antennas in the same position and location. This time packets were just received at an effective -9dBm, so the dry forest was 5dB better than the wet one. Or more simply, when the forest was wet, signals were reduced by around 5dB.

So, the big question: if the transmissions had been at 10mW (the normal ISM band limit) then how far might the signals propagate in worst case (wet) conditions?

If packets are sent at 10dBm but -4dBm is enough to cover the 200m, then there is a link margin at 200m of 14dB. If the forest is assumed to be uniform, an extra 14dB of link margin should increase the distance by 5 times, to 1000m.
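
Since free space style loss changes by 6dB for every doubling of distance, the distance multiplier for a given extra margin is 10^(margin/20). A quick check of the figures used here and below, assuming the forest loss really is uniform;

  // Link margin to distance multiplier: multiplier = 10^(margin_dB / 20).
  #include <cstdio>
  #include <cmath>

  int main()
  {
    double base_m = 200.0;                   // the reference distance
    double margins[] = {14.0, 21.0};         // 10dBm and 17dBm TX versus the -4dBm needed
    for (double margin : margins)
      printf("%.0fdB margin gives %.0fm\n", margin, base_m * pow(10.0, margin / 20.0));
    return 0;
  }

This prints 1002m and 2244m, matching the 1000m and 2200m estimates.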

If the full 17dBm (50mW) of the transmitter were used, the distance covered could increase to around 2200m.

There is more: for this test the signals were being sent at a LoRa data rate of 1562bps. If the data rate were lowered to 150bps, which is fine for a lot of data gathering and control applications, range would increase by a further 3 times or so at the same power.

The first LoRa received from Space!

During the recent The Things Network (TTN) conference in Amsterdam, Thomas Telkamp arranged to borrow some time on the maritime satellite NORSAT2 and used its SDR to transmit LoRa as the satellite passed over Amsterdam.

The LoRa was transmitted on 162MHz at 26dBm (400mW) into an approximately 6dB gain steerable Yagi which was pointed at the horizon, towards Amsterdam, during the pass; the data rate was 292bps.

The TTN receiver was a standard Microchip LoRa 169MHz device with a simple antenna, placed on the roof of a two storey building in the industrial area just to the north west of Amsterdam centre. The LoRa was received as soon as the satellite cleared the horizon, a range of 2763km.

There was a challenge (prize) to be the first to receive the LoRa during the conference. One of my own very simple handheld Arduino Pro Mini receivers, fitted with a 433MHz LoRa module (i.e. the wrong one for the frequency) and a telescopic whip antenna, was good enough to receive the first LoRa on 162MHz. The insertion loss of feeding 162MHz into a 433MHz receiver had been estimated by Thomas at around 20dB! The receiver was just left propped against the window of my hotel bedroom. I received packets close to the limit of reception, with an SNR of -18dB.


LoRa from Space


The Video of the ‘LoRa from Space’ presentation can be viewed here;



The problem with co-ax cable connections to antennas

I have been experimenting and testing some simple antennas for 868MHz, primarily for Internet of Things (IoT) applications.

A dipole used vertically has around the same performance as a 1/4 wave vertical with 4 x 1/4 wave radials, but the dipole is easier to build. In the test setup shown below I am using a LoRa module as a transmitter and measuring the RF field strength it produces at some distance.

Dipole and Co-ax 1

The dipole is built on a BNC chassis socket and the radio module is mounted directly onto the antenna using adapters; I did not quite have the right one, so there is a BNC plug to plug adapter in there. I will re-test when I have the right adapter, but the objective of the test was really to see what we can do to reduce losses when a co-ax cable is used to connect the antenna.

Mounting the module directly on the antenna like this is often not going to be convenient, especially if the antenna is going outside.

Dipole and Co-ax 2

So what happens when a length of co-ax is used to connect the antenna, as shown in the second picture?

The outer of the co-ax cable (which is 0.5m long) now becomes part of the antenna and acts to detune it. Compared to the setup in the first picture, the co-ax causes the antenna output to drop by circa 5dB, a significant loss. Whether this can be mitigated by retuning the antenna will be the subject of another article.

Dipole and Co-ax 3

So we are using the co-ax but the antenna performance drops significantly; what do we do about it? One simple option is to use a clamp-on ferrite choke of the type you see on monitor cables or power supplies.

The ferrite choke acts to stop RF travelling down the co-ax outer, so it does not detune the antenna so much. The test result was that the transmitter was only putting out 1dB less than in the first picture, a significant improvement over the 5dB loss without the choke in place.


I will be checking some other clamp-on chokes to see if the same results occur with low cost chokes and longer cables too.

How to use low bandwidths with the LoRaTracker programs.

These comments relate specifically to my own tracker programs which can be found here;


Using a low LoRa bandwidth can improve the range/distance you can achieve; you can do much the same by increasing the spreading factor. These range improvements have a cost: the packets take longer to transmit. At bandwidths above 25kHz local regulations may restrict you to a 10% duty cycle, so a long range, low data rate packet that takes maybe 5 seconds to send can only be sent every 50 seconds or so.

With a low LoRa bandwidth packet you can likely use greater duty cycles so you may be able to send the packet continuously, although do check your local regulations.

However, at the lowest possible LoRa bandwidth, 7.8kHz, the LoRa transmitter and receiver are very likely to be too far apart in frequency for packet reception to work initially. Once packet reception is actually taking place at a low bandwidth, we can use automatic frequency control (AFC) to keep the receiver within a few hundred hertz of the transmitter, even if the transmitter frequency changes due to temperature shifts etc.

The LoRaTracker receiver programs can deal with this issue more or less automatically; there is no need for manual adjustments of the receiver or remote adjustments of the transmitter.

The HAB2 LoRaTracker program has bind functionality: the transmitter sends a bind packet at start-up. The receiver can be put into bind mode and will pick up and use the LoRa settings of the transmitter; a consequence of this is that the receiver will also adjust to the frequency of the transmitter. However, the bind functionality is kept short range on purpose, so if your receiver is some distance away, or is started up after the transmitter powers up, you won't be able to use the bind to frequency lock your transmitter and receiver.

By default the HAB2 tracker program puts out a binary location-only packet (the search mode packet) at bandwidth 62.5kHz, spreading factor 12 (SF12). This packet is only 12 bytes long so can be sent fairly often even at this 'high' bandwidth. Let's imagine the tracker mode packet (the HAB payload) has been set to use a bandwidth of 7.8kHz at SF8. If the transmitter and receiver are approximately 2kHz apart in frequency, packet reception will not work.

The 62.5kHz bandwidth search mode packet has a much greater capture range of around 15kHz, so it's unlikely transmitter and receiver will be far enough apart to stop packet reception working.

All we need to do is temporarily switch the receiver over to search mode, wait for it to receive a packet, then swap back to tracker mode. The reception of the search mode packet will change the receiver frequency to match the transmitter, and the two should then stay locked together.
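
In outline the receiver logic is simple; this sketch shows the flow only, with all four helper functions as hypothetical stubs rather than the real LoRaTracker routines;

  // Wide-then-narrow AFC lock-on flow, in outline. All four helpers are
  // hypothetical stubs, not the LoRaTracker API.
  #include <cstdio>

  void   configureLoRa(double bwkHz, int sf) { printf("RX: BW%.1fkHz SF%d\n", bwkHz, sf); }  // stub
  bool   waitForPacket()                     { return true; }                                // stub
  double frequencyErrorHz()                  { return 1800.0; }  // stub, from the FEI registers
  void   shiftRxFrequency(double errHz)      { printf("RX shifted %.0fHz\n", errHz); }       // stub

  int main()
  {
    configureLoRa(62.5, 12);                 // search mode, wide capture range
    if (waitForPacket())
      shiftRxFrequency(frequencyErrorHz());  // AFC pulls RX onto the TX frequency
    configureLoRa(7.8, 8);                   // now safe to swap to tracker mode
    return 0;
  }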

If you're not using the search mode packet, you could switch the receiver across to command mode listen and achieve the same automatic set-up.

The changes to the LoRa tracker programs to allow this to work are in this program;


To be found here;


You will also need the updated LoRaTracker library from here;


It would be straightforward to automate the use of the low bandwidths: just send out a regular, very short, higher bandwidth SF12 packet to act as an AFC calibrator. The receiver initially listens for this packet, does the AFC when it's received and switches across to the low bandwidth mode.

The impact of bandwidth on LoRa sensitivity.

A LoRa device provides several options if you want to improve the reception range of the packets you are sending. All of these 'improvements', apart from increasing transmitter power, result in slower packets that take longer to send.

The options are;

Increase transmitter power

Increase the spreading factor (SF)

Reduce the bandwidth

Increase the coding rate

What I want to measure is the real world effect of reducing the bandwidth.

For a LoRa receiver to detect (capture) a packet, the transmitter and receiver centre frequencies need to be within 25% of the bandwidth of each other; that's according to the data sheet. Myself, I would err on the cautious side and plan for 20%, which I have found is closer to real world conditions.

So what does the 25% capture range mean?

Assume the transmitter has a centre frequency of 434MHz and we are using a LoRa bandwidth of 500kHz. 25% of this is 125kHz, so as long as the receiver centre frequency is within +/- 125kHz of 434MHz, the receiver should pick up the packets. Thus the receiver centre frequency needs to be within the range 433.875MHz to 434.125MHz.

A lot of LoRa modules do not use temperature compensated crystal oscillators for their reference frequency but use low cost plain crystals instead; this helps to keep the cost of LoRa modules down. In practice I have seen 434MHz LoRa modules at most +/- 5kHz from 434MHz, which is not bad for low cost crystals.

However, one module might be up to 5kHz high and another might be 5kHz low, so the difference between two modules could be as much as 10kHz. Using the 25% of bandwidth rule you could have a situation where, out of the box, two LoRa modules might not communicate with each other at 41.7kHz bandwidth, and this has happened with some of the modules I have used. It's for this reason, and to significantly improve interoperability of devices, that The Things Network (TTN) and LoRaWAN typically use a bandwidth of 125kHz; it's then unlikely that two devices will be far enough apart in frequency for communications to fail.
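
A quick way to see why 125kHz is the safe choice is to take a worst case 10kHz offset and check it against the capture window for each bandwidth, here using my cautious 20% figure rather than the data sheet 25%;

  // Check a worst case TX/RX offset against the capture window for
  // several LoRa bandwidths. 20% of bandwidth used (data sheet says 25%).
  #include <cstdio>

  int main()
  {
    double offset = 10.0e3;                  // +5kHz crystal versus -5kHz crystal
    double bandwidths[] = {125.0e3, 62.5e3, 41.7e3, 20.8e3, 7.8e3};
    for (double bw : bandwidths)
    {
      double window = bw * 0.20;
      printf("BW %6.1fkHz window +/-%5.2fkHz %s\n", bw / 1e3, window / 1e3,
             offset <= window ? "OK" : "may fail");
    }
    return 0;
  }

At 125kHz the window is +/-25kHz, so the 10kHz worst case is no problem; at 41.7kHz and below it is.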

Calibrating LoRa modules with a compensation factor, so that variations in centre frequency are eliminated at a particular temperature, can allow lower bandwidths to be used. You do though need a frequency counter or similar to measure the centre frequency of each individual module.

Fortunately the LoRa device will, on reception of a packet, calculate an estimate of the frequency error between transmitter and receiver, so it is possible for the receiver to track any transmitter frequency changes due to temperature shifts, even at low bandwidths. Do note though that this AFC functionality only works once packet reception has started.
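
On the SX127x series the estimate is read from the three LoRa FEI registers; a sketch of the conversion is below, with the SPI register read stubbed out (the register addresses and formula are from the SX1276/8 data sheet);

  // Frequency error from the SX127x LoRa FEI registers (0x28 to 0x2A).
  // Data sheet: Ferror = FEI * 2^24 / Fxtal * (BW_kHz / 500).
  #include <cstdio>
  #include <cstdint>

  uint8_t readRegister(uint8_t addr) { (void)addr; return 0; }   // stub for a real SPI read

  int main()
  {
    int32_t fei = ((int32_t)(readRegister(0x28) & 0x0F) << 16)   // 20 bit value
                | ((int32_t)readRegister(0x29) << 8)
                |  (int32_t)readRegister(0x2A);
    if (fei & 0x80000) fei -= 0x100000;                          // sign extend
    double bw_khz = 62.5;                                        // bandwidth in use
    double error_hz = (double)fei * (1L << 24) / 32.0e6 * (bw_khz / 500.0);
    printf("Frequency error %.0fHz\n", error_hz);                // offset to correct by
    return 0;
  }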

Enough preamble; what I really wanted to measure was how much sensitivity improved as the bandwidth was reduced with the spreading factor kept the same. The table below shows the quoted sensitivity in dBm for a LoRa device, and the data rate, for the range of spreading factors and bandwidths.

Bandwidth vs Spreading Factor

In the test I carried out I was using a spreading factor of 8 and comparing a bandwidth of 7.8kHz against 62.5kHz; the table shows that at 62.5kHz the sensitivity should be -128dBm, improving to -139dBm at a bandwidth of 7.8kHz. The test would not measure sensitivity itself, but rather how much transmit power was required to cover a link at each of the bandwidths. The quoted data sheet sensitivity is not so important; how much power you actually need to cover a particular distance is.

This descending power method of testing I use is described in more detail here;


I set up my 'typical' 1km across town link so that at a bandwidth of 62.5kHz SF8, packet reception stopped when the transmitter power was around 14dBm. The transmitter antenna needed a 20dB attenuator fitted to achieve this, so in this case the 1km was covered with -6dBm of actual transmitted signal.

The test software sent 62.5kHz bandwidth test packets at 17dBm down to 2dBm in 1dBm steps, then flipped across to a bandwidth of 7.8kHz and did the same. Automatic frequency control operated at the receiver, keeping it within 250Hz or so of the transmitter and giving reliable reception at the lower bandwidth.

The results are shown in the graph below; it shows how many packets of a particular power were received;

SF8 Bandwidth Comparison

The data sheet predicted around a 10dB improvement as the bandwidth changed from 62.5kHz down to 7.8kHz, and that is close to what the graph shows. Note the data rate dropped from 1562bps to 195bps. The 10dB improvement would represent a range improvement of around 3 times.

So what’s the point of low bandwidth?

An interesting question.

With bandwidths lower than 62.5kHz you cannot assume that a transmitter and receiver will work together, due to differences in centre frequency. Calibration can help, but if one end of the link is very cold then the frequency could shift enough to stop initial reception, so why bother?

There are two basic ways to improve LoRa link performance or range. First is to use a higher bandwidth and spreading factor, the other is to use a lower bandwidth and spreading factor.

Starting from a 62.5kHz bandwidth SF8 set-up, you get around the same increase in performance, and much the same data rate, by changing to SF12 as you would by staying at SF8 and reducing the bandwidth to 7.8kHz.

In a lot of places around the world, if you keep below a 25kHz channel width you can use up to 100% duty cycle, i.e. transmit continuously. Above 25kHz channel widths, or LoRa bandwidths above 20.8kHz, duty cycle may be limited by regulation to 10%. SF12 at a higher bandwidth might give you the same range as a lower bandwidth and spreading factor combination, but you may be significantly restricted by duty cycle as to how much data you can transmit in a given period.

So if you want to send a lot of data, more than a 10% duty cycle allows, it makes sense to use a lower bandwidth and spreading factor combination, if you can overcome the frequency capture issues mentioned above.

LoRa Signal Quality – RSSI or SNR?

It’s helpful to know how good the LoRa signals you are receiving are, or perhaps more importantly how close they are to failure.

The LoRa device measures two things during packet reception, the received signal strength indicator (RSSI) measured in dBm, and the signal to noise ratio (SNR) measured in dB.


From practical experiments I have observed that as the reported SNR approaches the limit specified for the spreading factor in use, packet reception will start to fail.

For instance, at SF8 the SNR limit is -10dB. If you're transmitting packets that are being received at an SNR of 0dB and you then reduce transmit power by 8 or 9dB, the reported SNR will drop to around -9dB and packet reception starts to fail. I have found the reported SNR a very good indication of approaching reception failure.


Under very good reception conditions, with strong signals, the reported SNR rarely goes above 8dB, even as signals get stronger. So for very strong signals SNR is not a good indicator of signal quality.

I decided to plot the results of a large variety of reception conditions, at SF7 bandwidth 125kHz. The signals varied from very strong, with the transmitter and receiver close together, to very weak signals where packet reception was failing. The results are plotted in the graph below.

SNR versus RSSI Graph

You can see the spread of reported RSSI for SNRs of 10, 9 and 8; the range is -45dBm to -105dBm. So clearly SNR is not a lot of help as a quality indication for very strong signals.

But then look what happens as the SNR drops: at the failure point, -8dB SNR and below, the RSSI varies from -100dBm to -120dBm.

For an overall quality indicator, perhaps a compromise is needed: if the SNR is say 7dB or higher, use the RSSI figure, and if the SNR is lower than 7dB, use the SNR.

Just a thought.
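
Made concrete, it could look like the function below; the thresholds are just the ones suggested above, and the -90dBm 'strong' boundary is an arbitrary illustration;

  // A possible combined quality indicator: RSSI for strong signals
  // (where SNR saturates), SNR for weak ones. Thresholds as suggested
  // above; the -90dBm boundary is an arbitrary illustration.
  #include <cstdio>

  const char *signalQuality(int rssi_dBm, int snr_dB)
  {
    if (snr_dB >= 7)
      return rssi_dBm > -90 ? "strong" : "good";
    return snr_dB > -8 ? "fair" : "marginal";    // -8dB SNR and below is near failure
  }

  int main()
  {
    printf("%s\n", signalQuality(-60, 9));       // strong
    printf("%s\n", signalQuality(-110, -9));     // marginal
    return 0;
  }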

Ping testing the Heltec Wi-Fi LoRa OLED device

This Heltec device looks very promising: it's an ESP32 module with a built in LoRa device, Wi-Fi, Bluetooth and an OLED display to boot!

Heltec Module

I have tested some other micro controller modules where the faster processor produces enough electromagnetic interference (EMI) to reduce the LoRa sensitivity. This problem does not affect installations where the LoRa device antenna is remote from the micro controller, such as with a roof mounted antenna, but there is an effect if the LoRa antenna is local, as it will be for a portable device or a small sensor node.

With some simple equipment it is not difficult to test for any adverse effect, and we can measure that effect in dB.

To carry out a ping test you need a LoRa transmitter that repeatedly sends packets at the frequency, bandwidth, spreading factor and coding rate that you specify. The test is best performed in a large open field, to reduce the impact of external interference from PCs, lights and Wi-Fi, and of reflections from nearby objects.

The ping test requires that you reduce the reception range of the packets to around 100m or less. This can be achieved by shielding the transmitter in a metal box and using attenuators in series with the antenna to significantly reduce the transmitted signal level. Even an SMA terminator may radiate enough signal to work. For the Heltec device comparison the transmitter, an Arduino ATMega Pro Mini and Hope RFM98 LoRa device, was placed in a tin on a pole in the middle of the field.

Below, left to right, transmitter, ATMega receiver and Heltec receiver.

Transmitter, ATMega Receiver, Heltec Receiver

The receiver used to set up the test is also an Arduino Pro Mini, with a Dorji DRF1278F LoRa device. It is set up with the same LoRa parameters and has an LED and buzzer which activate whenever a packet is received. The transmitter sends packets every second or so.

With the receiver in your hand you walk away from the transmitter until the LED/buzzer stops. If you get to 100m and you're still receiving packets, you need to attenuate the transmitter some more and try again.

Let's imagine we have now set it up so that the packets stop being received at 100m when the Arduino Pro Mini receiver is used.

We then repeat the test, either with a different type of receiver, or perhaps a different transmitter or receiver antenna. This time the LED/buzzer stops at 50m. So the reception distance in the first test is twice the second, and we know that twice the distance requires 4 times, or 6dB, more power.

Thus by simple distance measurement we can conclude that the antenna (or receiver) used in the first test is 6dB better than in the second. This technique for making dB measurements works if only one item is changed at a time, the receiver antenna is held at the same height and orientation for both tests, and the area for testing is large enough that reflections from nearby objects are minimal.
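
The arithmetic behind that conclusion, for any pair of distances, assuming free space (inverse square) loss;

  // Distance ratio to dB: difference = 20 * log10(d1 / d2).
  #include <cstdio>
  #include <cmath>

  int main()
  {
    double d1 = 100.0, d2 = 50.0;                          // metres
    printf("Difference %.1fdB\n", 20.0 * log10(d1 / d2));  // 6.0dB
    return 0;
  }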

Over to the local park I went and set up the test as described above; the transmitter with an SMA terminator fitted, and the lid missing from the box, gave me 108m. LoRa settings were 434.4MHz, bandwidth 125kHz, spreading factor 8, coding rate 4:5.

I then changed to the Heltec Wi-Fi LoRa receiver and repeated the test; I received packets up to 70m. Thus the Heltec board had circa 3.5dB worse receiver performance.

Now although the test showed the difference between two receivers, it's possible that the LoRa device on the Heltec board was below par. You could of course test a number of modules to get an average result, but that could get expensive. There is an alternative however: use the same LoRa device to test first with the ATMega receiver and then with the Heltec ESP32 board. When I have suitable breadboards to build these receivers, I will repeat the test.

Reducing project sleep mode current to 0.001uA

The nanoPower

(A very low current sleep mode power controller)


This small PCB is designed to put battery powered micro controller projects into an extreme low current sleep mode and turn them back on again up to one month later.

With care, an original project design may get its sleep mode current consumption down to 10uA or perhaps a bit less. But if a project has been designed without low power sleep mode in mind, a complete redesign is likely not an option.

The nanoPower can be used with most battery powered micro controller projects, old or new, without changing the design. It will reduce battery drain in sleep mode to an almost unmeasurable figure, less than 0.001uA.

The nanoPower is suitable for projects that are battery powered at up to 6V and connects between the battery itself and the project's battery input. There is reverse battery and over current protection. A 4 pin cable is required to connect the nanoPower to the project's I2C interface, 5V or 3.3V, for control. Simple software on the project configures the real time clock (RTC) on the nanoPower; any sleep time between 1 second and one month can be selected.

On a once per hour wakeup the RTC's lithium backup battery should keep the nanoPower going for a year. There is a power switch on the board so that you can be sure to disconnect the project from the battery if need be.

To use the nanoPower, connect it between battery and project as shown in the picture below and turn the power slide switch on (up).


To first use the nanoPower with a project you need to press and hold down the push button, on the left in the picture. This forces the power on. Keep the button pressed until your initial program has been loaded and run.

Any project using the nanoPower needs, as the first instructions in its program after reset or power up, to write to the RTC and configure it to keep the power on.

Sample code has been tested for Arduino and Micropython.
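
By way of illustration only, an Arduino version of those 'first instructions' might look like the sketch below; the I2C address, register and value used here are hypothetical placeholders, not the real nanoPower protocol, so refer to the sample code for the actual details;

  // Hold power on as the very first thing after reset. The address,
  // register and value are hypothetical placeholders - see the real
  // nanoPower sample code for the actual protocol.
  #include <Wire.h>

  const uint8_t RTC_ADDRESS = 0x52;      // hypothetical I2C address

  void setup()
  {
    Wire.begin();
    Wire.beginTransmission(RTC_ADDRESS);
    Wire.write(0x00);                    // hypothetical control register
    Wire.write(0x01);                    // hypothetical 'keep power on' flag
    Wire.endTransmission();
    // ... the project's normal setup continues here ...
  }

  void loop() { }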

My problem with receiver sensitivity and link budgets

It’s simple: when testing, I cannot get even close to the figures manufacturers quote in their data sheets for the sensitivity of their radio devices. This could be important, as quoted receiver sensitivity is often used in calculating link budgets.

Two examples;

The Hope RFM22B, which is a Si4432 based ISM band radio module, has a quoted sensitivity of -121dBm, but can you get this in real world tests? I for one could not.

The Hope RFM98, which is an SX1278 based LoRa module, quotes a sensitivity of -131dBm; I could not get close to that either.

Hope RFM22B/Si4432

I was evaluating the RFM22B for a long distance communications project, so I was testing how much power I needed to cover a 40km hilltop to hilltop link. The RFM22B is a typical frequency shift keying (FSK) type transceiver.

The hills were above Cardiff (UK) and 40km away in the Mendip hills, with the Bristol Channel in between. See blue line 3 in the picture below, then the route profile and Fresnel zone profile.



Fresnel Zone

I was using ¼ wave wires on transmitter and receiver, data rate 1000bps. Packet reception was reliable over the 40km with a transmit power of 100mW, with 50% of data packets being received at 50mW.

You can calculate the signal level the receiver should be seeing, since we know the antenna type, the transmit power and the free space loss for the distance (117dB). I calculated the receiver was seeing signals of -96dBm.
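
The free space calculation is straightforward; FSPL(dB) = 32.45 + 20log10(f in MHz) + 20log10(d in km), and the receive level is transmit power minus that loss, plus any antenna gains. I have assumed the 434MHz band here;

  // Free space path loss and expected receive level for the 40km link.
  #include <cstdio>
  #include <cmath>

  int main()
  {
    double f_mhz = 434.0, d_km = 40.0;
    double fspl = 32.45 + 20.0 * log10(f_mhz) + 20.0 * log10(d_km);
    printf("FSPL %.0fdB\n", fspl);                                 // ~117dB
    printf("RX level %.0fdBm plus antenna gains\n", 20.0 - fspl);  // 100mW = 20dBm
    return 0;
  }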

Now there is the problem: at just less than -96dBm the RFM22B would not receive signals reliably, but the data sheet claims a sensitivity of -121dBm. What happened to the missing 25dB?

Hope RFM98/SX1278 LoRa

I did the same test, over the same 40km path, with the same antennas, this time using a Hope RFM98 LoRa module at a data rate of 1042bps. These devices are supposed to be long range, and indeed they were, covering the 40km with a mere 3dBm (2mW) of power; so much better than the 100mW needed for the RFM22B.

I recalculated the received signal level for 2mW; it was -114dBm. This is the same issue as above: reception stopped with signals lower than -114dBm, yet the data sheet sensitivity is quoted as -131dBm for the particular LoRa mode in use. So where has the missing 17dB gone?

The Impact of Noise

Imagine you're in a large open field. 100m away a radio is playing; you can just hear the music. You have a radio as well, tuned to a different station, so you turn it on. Can you still hear the radio 100m away? Very unlikely; the noise (literally) from your own radio drowns out the much quieter sound from the distant one.

Radio receivers work in much the same way, radio frequency noise, which is always present, can drown out very weak signals from far away.

What actually limits the ability of a radio to receive weak signals is most often its ability to discriminate the weak signals from the noise; this is the signal to noise ratio (SNR). Although it's often not stated in the data sheets, a lot of FSK receivers (such as the RFM22B) need a signal that is between 5 and 10dB above the noise level for successful decoding.

In the UHF band the receiver will typically be seeing around -100dBm to -105dBm of noise; the receiver itself may report this if you take an RSSI reading with no signal present. If there is around -100dBm of RF noise present and the receiver needs an SNR of +5 to +10dB to decode signals, then the minimum signal level it needs to see is somewhere between -90dBm and -100dBm. So when the 40km test above identified a signal requirement of -96dBm for reception, that matches the SNR requirement.

Back to the LoRa device. The SNR figures for the various modes are given in the data sheet; for the mode used in the 40km test the quoted SNR limit is -10dB. Thus if the noise level is -100dBm to -105dBm, the LoRa receiver would need around -110dBm to -115dBm of signal. So again, when the 40km test above identified a signal requirement of -114dBm for reception, that matches the SNR requirement.
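
The sums for both cases side by side, using a -100dBm noise floor;

  // Minimum usable signal = noise floor + required SNR. LoRa's negative
  // SNR requirement is what lets it decode below the noise.
  #include <cstdio>

  int main()
  {
    double noise = -100.0;                                  // typical UHF noise floor, dBm
    printf("FSK, +5 to +10dB SNR: %.0f to %.0fdBm\n", noise + 5.0, noise + 10.0);
    printf("LoRa, -10dB SNR: %.0fdBm\n", noise - 10.0);     // decoding below the noise
    return 0;
  }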


I don’t know where you might be able to receive signals at the levels that are quoted in device data sheets; somewhere very quiet with virtually no RF noise, perhaps out by Pluto?

And of course, will using quoted receiver sensitivity give you accurate results when calculating link budgets?

Is LoRa affected by local EMI? – ESP32 Tests.

It was mentioned in a previous post that a LoRa device can be significantly affected, i.e. its performance reduced, by the presence of local electromagnetic interference (EMI) from the switching circuits in USB power banks; read here for more detail;

The Perils of USB Power Banks

The power banks tested could reduce the link budget by up to 20dB, equivalent to cutting receive range/distance by a factor of 10. This mainly impacts portable devices, where the LoRa device antenna is close to the source of the EMI.

I did mention in the above post that in equipment where the antenna is local to the driving micro controller, the EMI the micro controller produces can also reduce link performance. My standard reference micro controller is an ATMega328P running at 8MHz or 16MHz with linear power supplies. When compared to an Arduino Due, which runs at 84MHz and uses switching supplies, the Due lost around 5dB in link performance, which cuts receive range roughly in half, so it is a significant effect.

The new ESP32 based controllers look very attractive for LoRa projects: lots of Flash and RAM, up to 160MHz processor speed, built in Wi-Fi and Bluetooth. There are also some modules, such as the Wemos Lolin32, that have built in charger support for a local lithium battery. These modules are native 3.3V, so none of the dark ages stuff and faff of using 5V Arduino Unos or Megas.

My Wemos ESP32 arrived and I needed a way to compare an ESP32 receiver set-up with a reference ATMega one. Using two different boards and LoRa devices could introduce errors if the performance of the two LoRa devices was not the same.

Several of my projects use LoRa devices in Mikrobus sockets, so I was able to breadboard first an ESP32 module and then an Arduino Pro Mini using the same LoRa device plugged into the breadboard. The antenna was kept the same as well.

I ran the test first with the ATMega and ESP32 using LoRa parameters of 434MHz, bandwidth 62.5kHz, SF12, which are long range settings. I then tested both micro controllers at bandwidth 125kHz and SF8. The graph of results is below; in the graph a decrease in performance shifts the trace to the left.

ATMega versus ESP32

Some details of how the tests are carried out will be found here;

Can we improve LoRa device performance?

At SF12 the red line of the ESP32 results is around 3.5dB behind the blue ATMega one, so it looks like the EMI from the faster ESP32 is degrading LoRa performance by that amount.

At SF8 the EMI has a lesser effect, with the ESP32 only losing about 1dB when compared to the slower ATMega.

Completely screening the receiver circuit and supply can reduce the amount of EMI the LoRa device sees.

I have one of the ESP32 modules with built in LoRa and OLED display on order and will be checking its performance versus a standard ATMega set-up.

Stuart Robinson

January 2018