The problem with co-ax cable connections to antennas

I have been experimenting and testing some simple antennas for 868MHz, primarily for Internet of Things (IoT) applications.

A dipole used vertically has around the same performance as a 1/4 wave vertical with 4 x 1/4 wave radials, but the dipole is easier to build. In the test set-up shown below I am using a LoRa module as a transmitter and measuring the RF field strength it produces at some distance.

Dipole and Co-ax 1

The dipole is built on a BNC chassis socket and the radio module is mounted directly onto the antenna using an adapter. I did not quite have the right one, so there is a BNC plug-to-plug adapter in there. I will re-test when I have the right adapter, but the objective of the test was really to see what we can do to reduce losses when a co-ax cable is used to connect the antenna.

Mounting the module direct on the antenna like this is often not going to be convenient especially if the antenna is going outside.

Dipole and Co-ax 2

So what happens when a length of co-ax is used to connect the antenna, as shown in the second picture?

The outer of the co-ax cable (which is 0.5m long) now becomes part of the antenna and acts to detune it. Compared to the set-up in the first picture the co-ax causes the antenna output to drop by circa 5dB, a significant loss. Whether this can be mitigated by retuning the antenna will be the subject of another article.

Dipole and Co-ax 3

So we are using the co-ax but the antenna performance drops significantly, what do we do about it? One simple option is to use a clamp-on choke of the type you see on monitor cables or power supplies.

The ferrite choke acts to stop RF travelling down the co-ax, so it does not detune the antenna so much. The test result was that the transmitter was only putting out 1dB less than in the first picture, which is a significant improvement over the 5dB loss without the choke in place.


I will be checking some other clamp-on chokes to see if the same results occur with low cost chokes and longer cables too.

How to use low bandwidths with the LoRaTracker programs.

These comments relate specifically to my own tracker programs which can be found here;

Using a low LoRa bandwidth can improve the range\distance you can achieve. You can do much the same by increasing the spreading factor. Range improvements have a cost: the packets take longer to transmit. At bandwidths above 25kHz, local regulations may restrict you to a 10% duty cycle, so a long range, low data rate packet that takes maybe 5 seconds to send can only be sent every 50 seconds or so.
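The duty cycle arithmetic above is simple enough to sketch. This is a minimal illustration, not part of the tracker software; the function name is my own.

```python
# Sketch: minimum gap between packet starts for a given duty-cycle limit.

def min_transmit_interval(airtime_s: float, duty_cycle: float) -> float:
    """Return the minimum interval between packet starts so that
    airtime / interval does not exceed the duty-cycle limit."""
    return airtime_s / duty_cycle

# A 5 second packet at a 10% duty cycle may start at most every 50 seconds.
print(min_transmit_interval(5.0, 0.10))   # 50.0
```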

With a low LoRa bandwidth packet you can likely use greater duty cycles so you may be able to send the packet continuously, although do check your local regulations.

However, at the lowest possible LoRa bandwidth, 7.8kHz, the LoRa transmitter and receiver are very likely to be too far apart in frequency for packet reception to work initially, and some combinations of modules may not work together even at 41.7kHz. Once we have packet reception actually taking place at a low bandwidth we can use automatic frequency control (AFC) functionality to keep the receiver within a few hundred hertz of the transmitter, even if the transmitter frequency changes due to temperature shifts etc.

The LoRaTracker receiver programs can deal with this issue more or less automatically; there is no need for manual adjustments of the receiver or remote adjustments of the transmitter.

The HAB2 LoRaTracker program has bind functionality: the transmitter sends a bind packet at start-up. The receiver can be put into bind mode and will pick up and use the LoRa settings of the transmitter. A consequence of this bind is that the receiver will also adjust to the frequency of the transmitter. However, the bind functionality is kept short range on purpose, so if your receiver is some distance away, or is started up after the transmitter powers up, you won't be able to use the bind to frequency lock your transmitter and receiver.

By default the HAB2 tracker program puts out a binary location-only packet (the search mode packet) at a bandwidth of 62.5kHz, spreading factor 12 (SF12). This packet is only 12 bytes long so can be sent fairly often even at this ‘high’ bandwidth. Let's imagine the tracker mode packet (HAB payload) has been set to use a bandwidth of 7.8kHz, SF8. If the transmitter and receiver are approximately 2kHz apart in frequency, packet reception will not work.

The 62.5kHz bandwidth search mode packet has a much greater capture range of 15kHz, so it's unlikely the transmitter and receiver will be far enough apart to stop packet reception working.

All we need to do is temporarily switch the receiver over to search mode, wait for it to receive a packet, then swap back to tracker mode. The reception of the search mode will change the receiver frequency to match the transmitter and the two should then stay locked together.
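The switch-and-lock sequence can be illustrated as below. This is not the actual LoRaTracker code, just a sketch of the logic; the radio object and its `set_mode()`, `wait_packet()`, `frequency_error()` and `set_frequency()` helpers are hypothetical.

```python
# Illustrative sketch of the receiver logic described above: listen in
# search mode, apply the frequency error reported on packet reception,
# then drop to the low-bandwidth tracker mode.

def lock_to_transmitter(radio, base_freq_hz: int) -> int:
    radio.set_mode(bandwidth=62_500, spreading_factor=12)   # search mode
    radio.wait_packet()                                     # block until a packet arrives
    corrected = base_freq_hz + radio.frequency_error()      # the AFC step
    radio.set_frequency(corrected)
    radio.set_mode(bandwidth=7_800, spreading_factor=8)     # tracker mode
    return corrected
```

Once locked in this way the receiver can keep applying the reported frequency error on every received packet, tracking any temperature drift at the transmitter.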

If you're not using the search mode packet you could switch the receiver across to command mode listen and achieve the same automatic set-up.

The changes to the LoRa tracker programs to allow this to work are in this program;


To be found here;

You will also need the updated LoRaTracker library from here;

It would be straightforward to automate the use of the low bandwidths: just send out a regular, very short, higher bandwidth SF12 packet to act as an AFC calibrator. The receiver initially listens for this packet, does the AFC when it's received and switches across to the low bandwidth mode.

The impact of bandwidth on LoRa sensitivity.

A LoRa device provides several options if you want to improve the reception range of the packets you are sending. All of these ‘improvements’, apart from increasing transmitter power, result in slower packets that take longer to send.

The options are;

Increase transmitter power

Increase the spreading factor (SF)

Reduce the bandwidth

Increase the coding rate

What I want to measure is the real world effect of reducing the bandwidth.

For a LoRa receiver to detect (capture) a packet, the transmitter and receiver centre frequencies need to be within 25% of the bandwidth of each other, according to the data sheet. Myself, I would err on the cautious side and plan for 20%, which I have found is closer to real world conditions.

So what does the 25% capture range mean?

Assume the transmitter has a centre frequency of 434MHz and we are using a LoRa bandwidth of 500kHz. 25% of this is 125kHz, so as long as the receiver centre frequency is within +/- 125kHz of 434MHz, the receiver should pick up the packets. Thus the receiver centre frequency needs to be within the range 433.875MHz to 434.125MHz.

A lot of LoRa modules do not use temperature compensated crystal oscillators for their reference frequency but use low cost plain crystals instead; this helps to keep the cost of LoRa modules low. In practice I have seen 434MHz LoRa modules at most +/- 5kHz from 434MHz, which is not bad for low cost crystals.

However, one module might be up to 5kHz high and another might be 5kHz low, so the difference between two modules could be as high as 10kHz. Using the 25% of bandwidth rule you could have a situation where, out of the box, two LoRa modules might not communicate with each other at 41.7kHz bandwidth, and this has happened to some of the modules I have used. It's for this reason, and to significantly improve interoperability of devices, that The Things Network (TTN) and LoRaWAN typically use a bandwidth of 125kHz; it's then unlikely that two devices will be far enough apart in frequency for communications to fail.
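The 25% rule and the worst-case crystal offset can be checked numerically. A minimal sketch, using the data sheet's 25% figure; the function names are my own.

```python
# Sketch: will two modules capture each other's packets out of the box?
# Capture range taken as 25% of bandwidth (per the data sheet); worst-case
# offset is the sum of the two modules' crystal errors.

def capture_range_hz(bandwidth_hz: float) -> float:
    return 0.25 * bandwidth_hz

def can_capture(bandwidth_hz: float, offset_hz: float) -> bool:
    return abs(offset_hz) <= capture_range_hz(bandwidth_hz)

# Two modules each up to 5kHz out, in opposite directions: 10kHz apart.
print(can_capture(41_700, 10_000))    # True - 10.4kHz capture range, marginal
print(can_capture(20_800, 10_000))    # False - lower bandwidths will fail
print(can_capture(125_000, 10_000))   # True - hence TTN's choice of 125kHz
```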

Calibrating LoRa modules with a compensation factor, so that variations in centre frequency are eliminated at a particular temperature, can allow lower bandwidths to be used. You do though need a frequency counter or similar to measure the centre frequency of each individual module.

Fortunately the LoRa device will, on reception of a packet, calculate an estimate of the frequency error between transmitter and receiver, so it is possible for the receiver to track any transmitter frequency changes due to temperature shifts, even at low bandwidths. Do note though that this AFC functionality will only work once packet reception has started.

Enough preamble. What I really wanted to measure was how much sensitivity improves as the bandwidth is reduced with the spreading factor kept the same. The table below shows the quoted sensitivity in dBm for a LoRa device and the data rate for the range of spreading factors and bandwidths.

Bandwidth vs Spreading Factor

In the test I carried out I was using a spreading factor of 8, comparing a bandwidth of 7.8kHz against 62.5kHz; the table shows that at 62.5kHz the sensitivity should be -128dBm, improving to -139dBm at a bandwidth of 7.8kHz. The test would not measure sensitivity itself, but rather how much transmit power was required to cover a link at each of the bandwidths. The quoted data sheet sensitivity is not so important; how much power you actually need to cover a particular distance is.

This descending power method of testing I use is described in more detail here;

I set up my ‘typical’ 1km across town link so that at a bandwidth of 62.5kHz, SF8, packet reception stopped when the transmitter power was around 14dBm. The transmitter antenna needed a 20dB attenuator fitted to achieve this, so in this case the 1km was achieved with -6dBm of actual transmitted signal.

The test software sent 62.5kHz bandwidth test packets at 17dBm down to 2dBm in 1dBm steps, then flipped across to a bandwidth of 7.8kHz and did the same. Automatic frequency control operated at the receiver and kept it within 250Hz or so of the transmitter, giving reliable reception at the lower bandwidth.

The results are shown in the graph below, it shows how many packets of a particular power were received;

SF8 Bandwidth Comparison

The data sheet predicted a 10dB improvement as the bandwidth changed from 62.5kHz down to 7.8kHz, and that is close to what the graph shows. Note the data rate dropped from 1563bps to 195bps. The 10dB improvement would represent a range improvement of around 3 times.
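The "3 times" figure follows from the inverse-square law: received power falls with distance squared, so a dB improvement converts to a free-space range multiplier of 10^(dB/20). A quick sketch:

```python
import math

# Sketch: convert a link-budget improvement in dB to a free-space
# range multiplier, using the 10^(dB/20) rule.

def range_factor(db_improvement: float) -> float:
    return 10 ** (db_improvement / 20)

print(round(range_factor(10), 2))   # 3.16 - roughly 3x the distance
print(round(range_factor(6), 2))    # 2.0 - 6dB doubles the range
```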

So what’s the point of low bandwidth?

An interesting question.

With bandwidths lower than 62.5kHz, you cannot assume that a transmitter and receiver will work together, due to differences in centre frequency. Calibration can help, but if one end of the link is very cold then the frequency could shift enough to stop initial reception, so why bother?

There are two basic ways to improve LoRa link performance or range. First is to use a higher bandwidth and spreading factor, the other is to use a lower bandwidth and spreading factor.

Starting from a 62.5kHz bandwidth SF8 set-up, you get around the same increase in performance, and the same reduction in data rate, by changing to SF12 as you would by staying at SF8 and reducing the bandwidth to 7.8kHz.

In a lot of places around the world, if you keep below a 25kHz channel you can use up to 100% duty cycle, i.e. transmit continuously. Above 25kHz channels, or LoRa bandwidths above 20.8kHz, duty cycle may be limited by regulation to 10%. SF12 at a higher bandwidth might give you the same range as a lower bandwidth and spreading factor combination, but you may be significantly restricted by duty cycle as to how much data you can transmit in a given period.

So if you want to send a lot of data, more than a 10% duty cycle allows, it makes sense to use a lower bandwidth and spreading factor combination, if you can overcome the frequency capture issues mentioned above.

LoRa Signal Quality – RSSI or SNR?

It’s helpful to know how good the LoRa signals you are receiving are, or perhaps more importantly how close they are to failure.

The LoRa device measures two things during packet reception, the received signal strength indicator (RSSI) measured in dBm, and the signal to noise ratio (SNR) measured in dB.


From practical experiments I have observed that as the reported SNR approaches the limit specified for the spreading factor in use, packet reception will start to fail.

For instance, at SF8 the SNR limit is -10dB. If you're transmitting packets and they are being received at an SNR of 0dB, and you then reduce transmit power by 8 or 9dB, the reported SNR will drop to around -9dB and packet reception starts to fail. I have found the reported SNR a very good indication of approaching reception failure.


Under very good reception conditions, with strong signals, the reported SNR rarely goes above 8dB, even as signals get stronger. So for very strong signals SNR is not a good indicator of signal quality.

I decided to plot the results of a large variety of reception conditions, at SF7, bandwidth 125kHz. The signals varied from very strong, with the transmitter and receiver close together, to very weak signals where packet reception was failing. I plotted the results, see the graph below.

SNR versus RSSI Graph

You can see the spread of reported RSSI for SNRs of 10, 9 and 8; the range is -45dBm to -105dBm. So clearly SNR is not a lot of help in giving a quality indication for very strong signals.

But then look what happens as the SNR drops: at the failure point, -8dB SNR and below, the RSSI varies from -100dBm to -120dBm.

For an overall quality indicator, perhaps a compromise is needed. If the SNR is say 7dB or higher, use the RSSI figure, and if SNR is lower than 7dB, use the SNR.
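The compromise suggested above is easy to express in code. A minimal sketch; the 7dB threshold and the output format are just illustrative choices, not anything from the LoRa device itself.

```python
# Sketch of a combined quality indicator: report RSSI for strong
# signals (SNR at or above a threshold), report SNR otherwise.

def signal_quality(rssi_dbm: int, snr_db: int, threshold_db: int = 7) -> str:
    if snr_db >= threshold_db:
        return f"RSSI {rssi_dbm}dBm"    # strong signal: RSSI is meaningful
    return f"SNR {snr_db}dB"            # weak signal: SNR tracks failure

print(signal_quality(-61, 9))     # RSSI -61dBm
print(signal_quality(-115, -8))   # SNR -8dB
```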

Just a thought.

Ping testing the Heltec Wi-Fi LoRa OLED device

This Heltec device looks very promising; it's an ESP32 module with a built-in LoRa device, Wi-Fi, Bluetooth and an OLED display to boot!

Heltec Module

I have tested some other micro controller modules where the faster processor produces enough electromagnetic interference (EMI) to reduce LoRa sensitivity. This problem does not affect installations where the LoRa device antenna is remote from the micro controller, such as with a roof mounted antenna, but there is an effect if the LoRa antenna is local, as it will be for a portable device or a small sensor node.

With some simple equipment it is not difficult to test for any adverse effect, and we can measure this effect in dB.

To carry out a ping test you need a LoRa transmitter that repeatedly sends packets at the frequency, bandwidth, spreading factor and coding rate that you specify. The test is best performed in a large open field, to reduce the impact of external interference, from PCs, lights, Wi-Fi and reflections from nearby objects.

The ping test requires that you reduce the reception range of the packets to around 100m or less. This can be achieved by shielding the transmitter in a metal box and using attenuators in series with the antenna to significantly reduce the transmitted signal level. Even an SMA terminator may radiate enough signal to work. For the Heltec device comparison the transmitter, an Arduino ATMega Pro Mini and Hope RFM98 LoRa device, was placed in a tin on a pole in the middle of the field.

Below, left to right, transmitter, ATMega receiver and Heltec receiver.

Transmitter | ATMega Receiver | Heltec Receiver

The receiver used to set up the test is also an Arduino Pro Mini, with a Dorji DRF1278F LoRa device. It is set up with the same LoRa parameters and has an LED and buzzer which activate whenever a packet is received. The transmitter sends packets every second or so.

With the receiver in your hand you walk away from the transmitter until the LED\buzzer stops. If you get to 100m and you're still receiving packets, you need to attenuate the transmitter some more and try again.

Let's imagine we have now set it up so that packets stop being received at 100m when the Arduino Pro Mini receiver is used.

We then repeat the test, either with a different type of receiver, or perhaps a different transmitter or receiver antenna. This time the LED\buzzer stops at 50m. So the reception distance in the first test is twice the second, and we know that twice the distance requires 4 times, or 6dB, more power.

Thus by simple distance measurement we can conclude that the antenna (or receiver) used in the first test is 6dB better than in the second. This technique for making dB measurements works if only one thing is changed at a time, the receiver antenna is held at the same height and orientation for both tests, and the area for testing is large enough that reflections from nearby objects are minimal.
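The distance-to-dB conversion used here is the standard 20*log10 rule for free space. A quick sketch of the arithmetic:

```python
import math

# Sketch: dB difference between two set-ups from the ratio of their
# cut-off distances, since received power falls with distance squared.

def db_difference(distance_a_m: float, distance_b_m: float) -> float:
    return 20 * math.log10(distance_a_m / distance_b_m)

print(round(db_difference(100, 50), 1))   # 6.0 - twice the distance is 6dB
print(round(db_difference(108, 70), 1))   # 3.8 - circa the Heltec comparison
```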

Over to the local park I went and set up the test as described above; the transmitter with an SMA terminator fitted and the lid missing from the box gave me 108m. LoRa settings were 434.4MHz, bandwidth 125kHz, spreading factor 8, coding rate 4:5.

I then changed to the Heltec Wi-Fi LoRa receiver and repeated the test; I received packets up to 70m. Thus the Heltec board had circa 3.5dB worse receiver performance.

Now although the test showed the difference between the two receivers, it's possible that the LoRa device on the Heltec board was below par. You could of course test a number of modules to get an average result, but that could get expensive. There is an alternative however: use the same LoRa device, tested first with the ATMega receiver and then with the Heltec ESP32 board. When I have suitable breadboards to build these receivers, I will repeat the test.

Reducing project sleep mode current to 0.001uA

The nanoPower

(A Very low current sleep mode power controller)


This small PCB is designed to put battery powered micro controller projects into an extreme low current sleep mode and turn them back on again up to one month later.

With care, an original project design may get current consumption down to 10uA or perhaps a bit less in sleep mode. If a project has been designed without a low power sleep mode in mind, a complete redesign is likely not an option.

The nanoPower can be used with most battery powered micro controller projects, old or new, without changing the design. The nanoPower will reduce battery drain in sleep mode to almost unmeasurable figures, less than 0.001uA.

The nanoPower is suitable for projects that are battery powered at up to 6V and connects between the battery itself and the project's battery input. There is reverse battery and over current protection. A 4 pin cable is required to connect the nanoPower to the project's I2C interface, 5V or 3.3V, for control. Simple software on the project configures the real time clock (RTC) on the nanoPower. Any sleep time between 1 second and one month can be selected.

On a once per hour wakeup, the RTC's lithium backup battery should keep the nanoPower going for a year. There is a power switch on the board so that you can be sure to disconnect the project from the battery if need be.

To use the nanoPower, connect it between battery and project as shown in the picture below and turn the power slide switch on (up).


To first use the nanoPower with a project, press and hold down the push button, on the left in the picture. This forces the power on. Keep the button pressed until your initial program has been loaded and run.

Any project using the nanoPower needs, as the first instructions in its program after reset or power up, to write to the RTC and configure it to keep the power on.

Sample code has been tested for Arduino and Micropython.

My problem with receiver sensitivity and link budgets

It’s simple: when testing, I cannot get even close to the figures manufacturers quote in their data sheets for the sensitivity of their radio devices. This could be important, as quoted receiver sensitivity is often used in calculating link budgets.

Two examples;

The Hope RFM22B, a Si4432 based ISM band radio module, has a quoted sensitivity of -121dBm, but can you get this in real world tests? I for one could not.

The Hope RFM98, an SX1278 based LoRa module, quotes a sensitivity of -131dBm; I could not get close to that either.

Hope RFM22B\Si4432

I was evaluating the RFM22B for a long distance communications project, so I was testing how much power I needed to cover a 40km hilltop to hilltop link. The RFM22B is a typical frequency shift keying (FSK) type transceiver.

The hills were above Cardiff (UK) and 40km away in the Mendip hills with the Bristol Channel in between. See the blue line 3 in the picture bellow, then the route profile and Fresnel zone profile.



Fresnel Zone

I was using 1/4 wave wires on transmitter and receiver, data rate 1000bps. Packet reception was reliable over the 40km with a transmit power of 100mW, with 50% of data packets being received at 50mW.

You can calculate the signal level the receiver should be seeing, since we know the antenna type, transmit power and the free space loss for the distance (117dB). I calculated the receiver was seeing signals of -96dBm.
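That free space loss figure can be checked with the standard formula FSPL(dB) = 32.45 + 20*log10(f in MHz) + 20*log10(d in km). A quick sketch:

```python
import math

# Sketch: free-space path loss for the 40km, 434MHz link.

def free_space_loss_db(freq_mhz: float, distance_km: float) -> float:
    return 32.45 + 20 * math.log10(freq_mhz) + 20 * math.log10(distance_km)

print(round(free_space_loss_db(434, 40), 1))   # 117.2 - matches the figure above
```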

Now there is the problem: at just less than -96dBm, the RFM22B would not receive signals reliably, but the data sheet claims the sensitivity is -121dBm. What happened to the missing 25dB?

Hope RFM98\SX1278 LoRa

I did the same test, over the same 40km path, with the same antennas, this time using a Hope RFM98 LoRa module, data rate 1042bps. These devices are supposed to be long range, and indeed they were, covering the 40km with a mere 3dBm (2mW) of power, much better than the 100mW needed for the RFM22B.

I recalculated the received signal level for 2mW; it was -114dBm. This is the same issue as above: reception stopped at signals lower than -114dBm, yet the data sheet sensitivity is quoted as -131dBm for the particular LoRa mode in use. So where has the 18dB gone?

The Impact of Noise

Imagine you're in a large open field. 100m away a radio is playing and you can just hear the music. You have a radio as well, tuned to a different station, so you turn it on. Can you still hear the radio 100m away? Very unlikely; the noise (literally) from your own radio drowns out the much quieter sound from the distant radio.

Radio receivers work in much the same way, radio frequency noise, which is always present, can drown out very weak signals from far away.

What actually limits the ability of a radio to receive weak signals is most often its ability to discriminate the weak signals from the noise; this is the signal to noise ratio (SNR). Although it's often not stated in the data sheets, a lot of FSK receivers (such as the RFM22B) need a signal that is between 5 and 10dB above the noise level for successful decoding.

In the UHF band the receiver will typically be seeing around -100dBm to -105dBm of noise; the receiver itself may report this if you take an RSSI reading with no signal present. If there is around -100dBm of RF noise present and the receiver needs an SNR of +5 to +10dB to decode signals, then the minimum signal level it needs to see is somewhere between -90dBm and -100dBm. So when the 40km test above identified a signal requirement of -96dBm for reception, that matches the SNR requirement.

Back to the LoRa device. The SNR figures for the various modes are given in the data sheet; for the mode used in the 40km test the quoted SNR limit is -10dB. Thus if the noise level is -100dBm to -105dBm, the LoRa receiver would need around -110dBm to -115dBm of signal. So again, when the 40km test above identified a signal requirement of -114dBm for reception, that matches the SNR requirement.
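The reasoning in the last two paragraphs reduces to one addition: minimum usable signal = noise floor + required SNR. A minimal sketch:

```python
# Sketch: minimum usable signal from the noise floor and the SNR the
# receiver needs. FSK needs the signal above the noise; LoRa can
# decode below it (negative SNR limit).

def min_signal_dbm(noise_floor_dbm: float, required_snr_db: float) -> float:
    return noise_floor_dbm + required_snr_db

# FSK (RFM22B style): roughly +5 to +10dB SNR needed.
print(min_signal_dbm(-105.0, 10))    # -95.0
# LoRa mode from the 40km test: -10dB SNR limit.
print(min_signal_dbm(-105.0, -10))   # -115.0
```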


I don’t know where you might be able to receive signals at the levels quoted in device data sheets; somewhere very quiet with virtually no RF noise, perhaps out by Pluto?

And of course, will using quoted receiver sensitivity give you accurate results when calculating link budgets?

Is LoRa affected by local EMI? – ESP32 Tests.

It was mentioned in a previous post that a LoRa device's performance can be significantly reduced by the presence of local electromagnetic interference (EMI) from the power switches in USB power banks; read here for more detail;

The Perils of USB Power Banks

The power banks tested could reduce the link budget by up to 20dB, equivalent to cutting receive range\distance by a factor of 10. This mainly impacts portable devices, where the LoRa device antenna is close to the source of the EMI.

I did mention in the above post that in equipment where the antenna is local to the driving micro controller, the EMI it produces can also reduce link performance. The standard reference micro controller is an ATMega328P running at 8MHz or 16MHz with linear power supplies. When compared to an Arduino DUE, which runs at 84MHz and uses switching supplies, the DUE lost around 5dB in link performance, which cuts receiver range roughly in half, so it is a significant effect.

The new ESP32 based controllers look very attractive for LoRa projects: lots of Flash and RAM, up to 160MHz processor speed, built-in Wi-Fi and Bluetooth. There are also some modules, such as the Wemos Lolin32, that have built-in charger support for a local lithium battery. These modules are native 3.3V, so none of the dark ages stuff and faff of using 5V Arduino Unos or Megas.

My Wemos ESP32 arrived and I needed a way to compare an ESP32 receiver set-up with a reference ATMega one. Using two different boards and LoRa devices could introduce errors if the performance of the two LoRa devices was not the same.

Several of my projects use LoRa devices in Mikrobus sockets, so I was able to breadboard first an ESP32 module and then an Arduino Pro Mini, using the same LoRa device plugged into the breadboard. The antenna was kept the same as well.

I first ran the test with the ATMega and ESP32 at LoRa parameters of 434MHz, BW62500, SF12, which are long range settings. I then tested both micro controllers at BW125000 and SF8. The graph of results is below; in the graph a decrease in performance shifts the trace to the left.

ATMega versus ESP32

Some details of how the tests are carried out will be found here;

Can we improve LoRa device performance ?

At SF12 the red line of the ESP32 results is around 3.5dB behind the blue ATMega one, so it looks like the EMI from the faster ESP32 is degrading LoRa performance by that amount.

At SF8 the EMI has a lesser effect, with the ESP32 only losing about 1dB when compared to the slower ATMega.

Completely screening the receiver circuit and supply can reduce the amount of EMI the LoRa device sees.

I have one of the ESP32 modules with built in LoRa and OLED display on order and will be checking its performance versus a standard ATMega set-up.

Stuart Robinson

January 2018

Can we improve LoRa device performance ?

Can we improve LoRa performance is a question I have seen asked a few times. If we add bandpass filters or low noise amplifiers to a LoRa installation, will it improve performance and range?

A bandpass filter should limit the amount of external noise coming into the LoRa device's antenna input; a lot of noise could de-sensitise the LoRa receiver. I operate mainly at 434MHz and there is a bandpass filter made for this frequency. Its stated performance is;


Centre frequency: 433.92 MHz
Pass band: 431.92 ~ 435.92 MHz
Insertion loss: < 1.5dB @ 431.92 ~ 435.92 MHz

I needed to check this, so lacking a proper signal generator, I programmed a LoRa device to scan across the frequency band and put the bandpass filter on the input of my RF Explorer.

This was the output, first without the filter and then with.

No Bandpass Filter

With Bandpass Filter

Outside of 427MHz to 443MHz the attenuation was around 45dB, which should be good enough to get an idea as to whether a bandpass filter would improve LoRa reception; the insertion loss of 1.5dB looked about right too.

You might be inclined to measure the effects of bandpass filters or low noise amplifiers in a laboratory, but I am more interested in what happens in the real world. So we need a practical testing method to use in the real outdoors.

A LoRa transmitter can be programmed to send packets at power levels from 17dBm to 2dBm. If we send a series of packets starting at say 17dBm and reducing by 1dBm all the way down to 2dBm, we can use these packets to measure the real world performance of the link.

Imagine we have a working link and all packets from 17dBm down to 10dBm are received; the packets sent at 9dBm and below are too weak to be received. We then change something in the link, a better antenna (TX or RX), a bandpass filter or a low noise amplifier (LNA), and run the same test over the same link. Imagine that after such a change we are receiving all the packets down to 5dBm: whatever change we made has improved the link by 5dB.
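The measurement in this test reduces to a single subtraction, worth stating explicitly. A trivial sketch; the function name is my own.

```python
# Sketch: the improvement measured by the descending-power test is the
# difference between the lowest power levels still received in the
# before and after runs.

def link_improvement_db(lowest_rx_before_dbm: int, lowest_rx_after_dbm: int) -> int:
    return lowest_rx_before_dbm - lowest_rx_after_dbm

# Before the change packets stopped below 10dBm, after it below 5dBm.
print(link_improvement_db(10, 5))   # 5
```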

We have established this 5dB improvement with no actual test measurement equipment, just standard antennas and simple LoRa transmitters and receivers. You do need to run this type of test over a large number of iterations, but all that needs is patience.

I wrote the required link test software for Arduino, and you can use it on any of my tracker or receiver boards. The LoRa parameters of bandwidth, spreading factor and coding rate are all configurable, as is the frequency. You use the same settings for transmitter and receiver of course. The transmitted packets contain the power level used to send them, so they are easy to count.

The typical serial monitor output of the link test software whilst it's running is shown below. Note the lines showing the running totals (there were 94 17dBm packets received in the example below), which are printed both in readable form and as CSV so we can cut and paste into a spreadsheet for graphing.

RX SNR,-1dB RSSI,-61dB RXType,T Destination,* Source,1 Length,6 (17dBm)

RX SNR,-2dB RSSI,-62dB RXType,T Destination,* Source,1 Length,6 (16dBm)

RX SNR,-2dB RSSI,-62dB RXType,T Destination,* Source,1 Length,6 (15dBm)

RX SNR,-3dB RSSI,-63dB RXType,T Destination,* Source,1 Length,6 (14dBm)

RX SNR,-4dB RSSI,-62dB RXType,T Destination,* Source,1 Length,6 (13dBm)

RX SNR,-7dB RSSI,-63dB RXType,T Destination,* Source,1 Length,6 (12dBm)

RX SNR,-9dB RSSI,-61dB RXType,T Destination,* Source,1 Length,6 (10dBm)

Mode1 Test Packets 867

Mode1 Test Cycles 93

Mode1 Max Power 17dBm

17dBm,94 16dBm,94 15dBm,94 14dBm,94 13dBm,94 12dBm,93 11dBm,90 10dBm,85 9dBm,71 8dBm,40 7dBm,13 6dBm,5 5dBm,0 4dBm,0 3dBm,0 2dBm,0
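If you want to process that running-totals line programmatically rather than by cut and paste, it parses easily. A sketch assuming the line format shown above; the helper name is my own.

```python
# Sketch: turn the CSV running-totals line printed by the link test
# software into a dict of {power_dBm: packets_received} for graphing.

line = ("17dBm,94 16dBm,94 15dBm,94 14dBm,94 13dBm,94 12dBm,93 11dBm,90 "
        "10dBm,85 9dBm,71 8dBm,40 7dBm,13 6dBm,5 5dBm,0 4dBm,0 3dBm,0 2dBm,0")

def parse_totals(csv_line: str) -> dict:
    counts = {}
    for field in csv_line.split():
        power, count = field.split(",")          # e.g. "17dBm", "94"
        counts[int(power.rstrip("dBm"))] = int(count)
    return counts

totals = parse_totals(line)
print(totals[17], totals[8])   # 94 40
```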


So for our real world test we need a real world situation. For the receiver end I used a Diamond X50 UHF co-linear, a typical antenna for a TTN gateway perhaps. This antenna was on a mast approximately 5m from the ground in an urban area.

The transmitter node used a simple 1/4 wave wire. TX and RX were my DRF1278F LoRaTracker boards, which have SMA antenna sockets. The transmitter was positioned 1.5m off the ground in the middle of a house garden 1km away.

SF7 Tests

We first need to decide on the LoRa parameters for the test; I used 434MHz, BW125000, SF7, CR4:5.

The minimum LoRa device power, 2dBm, was plenty to cover that 1km, so the transmitter output power needed to be reduced a bit. I fitted a 12dB attenuator to the transmitter; I was then receiving packets down to around 12dBm.

The first test was run with the LoRa receiver connected directly to the base station antenna. Next I put the bandpass filter in line and ran the test again. Then I removed the filter and ran the test a third time with an SP7000 LNA between antenna and LoRa device. This is a high performing LNA, costing circa £400. I knew from testing on the $50SAT project that this LNA added 12dB of useful signal gain to the FSK packets from the RFM22B, but how much would it improve LoRa, which is already operating below the noise level?

I also tried a more modest LNA, the G4DDK, which can be bought as a kit for around £55.

With the figures collected, I used a spreadsheet to graph the results, see below. On the graph an improvement in performance will shift the trace to the right.

Graph

The bandpass filter seems to have little beneficial effect; reception is in fact slightly worse (shifted left in the graph) and the reduction seems to match the insertion loss of the filter.

The SP7000 LNA did have a beneficial effect; link performance improved by around 5dB, but not as much as would be expected with non-LoRa systems.

The lower cost LNA also gave an improvement, of around 4dB.

From this testing at SF7, there is a small benefit to using an LNA, but at an increase in complexity and cost. The LNA either needs to be in a receive-only path, or you need the auto switching of the SP7000 or an additional antenna sequencer.

Next, I will see what happens if you carry out the same tests at SF12, which operates at 20dB below the noise level.
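The demodulation SNR limits quoted here come from the Semtech SX127x datasheet; each spreading factor step buys roughly 2.5dB. A small sketch of the figures:

```python
# Approximate LoRa demodulation SNR limits from the Semtech SX127x
# datasheet; each extra spreading factor is worth about 2.5dB.
def snr_limit_db(sf):
    return 10 - 2.5 * sf   # SF7 -> -7.5dB, SF12 -> -20dB

for sf in range(7, 13):
    print(f"SF{sf}: {snr_limit_db(sf):+.1f}dB")

# The SF7 to SF12 difference:
print(snr_limit_db(7) - snr_limit_db(12))   # 12.5
```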

SF12 Tests

I repeated the same tests at SF12, and the results were a surprise. Since SF12 operates around 12.5dB further below the noise level than SF7 (the previous tests described above), I had expected the LNAs to have less effect; the reverse proved to be the case. See the graph below;

Bandpass Filters and LNAs SF12

Here the SP7000 LNA (yellow trace) added around 7dB of signal gain over the simple link (blue trace), and the G4DDK (green trace) added 3.5dB. Extra gain at lower spreading factors is less of an issue, since you can perhaps increase the spreading factor to compensate if you want more distance. However at SF12 you are at the limit, so the extra 7dB of link margin from a good LNA would approximately double the distance the SF12 packets could reach. This could be a significant benefit in some applications.
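The "approximately double the distance" figure follows from a free-space (inverse-square) path loss assumption, under which extra link margin multiplies range by 10^(margin/20), so 6dB roughly doubles it. A quick check of the two LNA figures:

```python
# Free-space path loss assumption: received power falls with the
# square of distance, so extra link margin in dB multiplies range
# by 10^(margin/20). 6dB is roughly a doubling.
def range_multiplier(margin_db):
    return 10 ** (margin_db / 20)

print(round(range_multiplier(7), 2))    # SP7000's 7dB gain -> 2.24
print(round(range_multiplier(3.5), 2))  # G4DDK's 3.5dB gain -> 1.5
```

Real-world propagation is rarely free-space, so treat these multipliers as an upper bound.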

Stuart Robinson

January 2018

micro:bit Mikrobus Shield

I thought that a micro:bit would be a useful development platform for a simple sensor shield for Internet of Things monitoring applications. With the micro:bit you can quickly switch between programming in MicroPython or C/C++ under the Arduino environment.

The micro:bit has just enough I/O pins to allow for a single Mikrobus socket. These sockets allow the shield to be used with a range of plug-in modules, including radio devices, and they eliminate the trailing wires that might otherwise be needed. The Mikrobus socket can be used for other devices too; MikroElektronika do a substantial range of compatible Click modules, see link below;

Click Modules

microbit Mikrobus Shield Reduced

I have Mikrobus PCBs available to make modules for several of the LoRa devices, such as the RFM96, RFM98, DRF1272 and DRF1278; the build cost of these modules is around £7. Using a LoRa radio device with the shield allows reception of sensor data over several kilometres.

A remote sensor will normally be battery powered, and this shield can be configured to run for many months, or even years, on battery power alone.

A ready-built low cost (£1) Real Time Clock module is used to provide the shutdown and wakeup functions. In sleep mode the micro:bit shield consumes a very miserly 0.001uA from the battery. The actual current is likely close to zero, but measuring below 1nA is not easy. Sleep time can be up to a month.
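With a sleep current that low, battery life is dominated almost entirely by the time the node spends awake. A rough estimator sketch; the 10mA active current, 30 seconds awake per hour and 2500mAh capacity are illustrative assumptions, not measurements from the shield:

```python
# Rough battery-life estimate for a duty-cycled sensor node.
# Assumed figures: ~10mA while awake, 30s awake per hour,
# ~1nA (0.001uA) asleep as measured on the shield.
def battery_life_days(capacity_mah, active_ma, active_s, period_s,
                      sleep_ma=0.000001):
    duty = active_s / period_s
    avg_ma = active_ma * duty + sleep_ma * (1 - duty)
    return capacity_mah / avg_ma / 24

# 3 x AA alkaline cells, roughly 2500mAh:
print(round(battery_life_days(2500, 10, 30, 3600)))   # ~1250 days
```

In practice alkaline self-discharge, not the sleep current, sets the upper limit on lifetimes this long.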

The battery connects directly to the shield; this would typically be 3 x AA batteries or a single Lithium-Ion battery. The shield has its own low drop-out regulator supplying 3.3V to the micro:bit and Mikrobus socket. Using the standard two alkaline batteries for the micro:bit supply can be marginal, as the supply voltage may be as low as 2.4V for most of the batteries' life.

There are pin connectors on board for typical I2C sensors such as the BME280 or discrete sensors such as the DHT11 and DS18B20. There is a connector intended for a serial connection such as a GPS. On the rear of the PCB is a micro SD card socket.

It is straightforward to add text displays, such as the ILI9341 or Nokia 5110, via an I2C-connected backpack I designed. The I2C display backpack uses an Arduino Pro Mini and is easy to build. The backpack is a good fit for the micro:bit; it allows a display to be added using no additional I/O pins, and the code library is very small. The software for the display backpacks has been tested on the micro:bit under MicroPython and the Arduino IDE.

These are the simple commands to drive the display in MicroPython;

WriteText('Hello World')

Driving the display in Arduino is much the same.

So far in the Arduino environment I have tested the functionality of an SPI LoRa module, reading and programming a GPS, SD card read/write, and the displays above.

I will next be testing the Microchip RN2483, which is popular for The Things Network (TTN) applications. I have designed a Mikrobus PCB to test this module.


Stuart Robinson