Monday, April 17, 2017

Generating Aruba SSH login keys and certificate

I came across the need for one of my scripts to connect to an Aruba controller the other day, and although I could have used a username/password option, I decided on certificate-based authentication just to learn something new.
The process is quite straightforward, but it took me a while to figure out. I chose Aruba as that is the vendor of choice where I work, but I'd say that the process would be similar for other vendors' gear (at least the certificate generation part).

An Aruba controller only accepts certificate uploads, not raw SSH public keys, so you must make a cert from a public/private key pair: we generate the pair with the ssh-keygen command and then use openssl to generate a certificate from it that can be uploaded. (If anyone has a foolproof solution for doing this with only one of these tools, please share it.)

Creating the key pair and certificate

1. First we create the priv/pub keys with ssh-keygen, providing a name for the key (ex. ssh-id_rsa). When asked for a passphrase I left it empty, as a passphrase would need to be entered every time the script runs, which I didn't want.
ssh-keygen

Generating public/private rsa key pair.
Enter file in which to save the key (/Users/primoz.marinsek/.ssh/id_rsa): ssh-id_rsa              >>>>>> PROVIDE A NAME FOR THE KEY HERE
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in ssh-id_rsa.
Your public key has been saved in ssh-id_rsa.pub.
The key fingerprint is:
SHA256:v0ImnCOiUhFQhe8/DlE6jA8bPaJb+nZosjiJuRPHJu0 p.m@XXXYYYZZ.local
The key's randomart image is:
+---[RSA 2048]----+
|.o.o.            |
|  o              |
|   o  .          |
|  .+.o           |
| o=oB. .S        |
|o.B*o+= o.       |
|o@oooo =  .      |
|@+E ..o .  .     |
|BX.. ... ..      |
+----[SHA256]-----+
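
As a side note, if you want to script this step as well, ssh-keygen can be run non-interactively. A minimal example with the same key name and an empty passphrase:

ssh-keygen -t rsa -b 2048 -f ssh-id_rsa -N ""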


2. Next we need to create a certificate that we will upload to the controller. For this we use openssl to create a PEM public certificate from the private key "ssh-id_rsa". I gave it a life of 3650 days, or 10 years, in this example. When asked for information to enter, it's your choice whether you want to fill it in or not.
openssl req -x509 -new -key ssh-id_rsa -days 3650 -out ssh-id_rsa-cert.pem
You are about to be asked to enter information that will be incorporated
into your certificate request.
....

With these 2 steps you have now created the key pair and a certificate that you can upload to the controller.

Uploading and enabling the user for login

The next steps involve uploading the certificate you just generated and creating a user to go with it. I'll continue with step 3 below, which starts with enabling public-key authentication for SSH-ing into the controller.

Note that some steps involve using the WEB GUI to upload the certificate. I've gotten used to the CLI in recent times and I use scp quite a bit, but I haven't found an elegant way of uploading things to a controller yet. I seem to be running into some cipher mismatches there.


3. This step enables the certificate option for SSH, which must be performed on a master controller ONLY. Enabling it on a local controller will not be allowed, either from the WEB GUI or the CLI.
In the WEB GUI, browse to
Management :: General :: SSH (Secure Shell) Authentication Method
and enable Client Public Key, or alternatively do it much more simply over the CLI as below

ssh mgmt-auth public-key


4. In the controller's WEB GUI go to Management :: Certificates, select the Upload tab and fill in the fields as below. Note that this has to be done on every MASTER AND LOCAL controller in your topology.
  • "Name" of choice for the cert (ex. Aruba-mgmt-user-crt)
  • Select the file from your disk
  • Select PEM as "Certificate Format"
  • For "Certificate Type" select "Public Cert"
  • Click "Upload"
5. Create a new user and use the SSH key for login (as before, on every MASTER AND LOCAL controller)
  • Go to Management :: Administration 
  • Add a new user under "Management Users"
    1. Under User Name input "ssh-global"
    2. Select the "Certificate Management" radio button
    3. Deselect "WebUI Certificate"
    4. Select "SSH Public Key"
      1. For Role select "root"
      2. For Client Certificate name select the previously uploaded certificate (Aruba-mgmt-user-crt), then click Apply

Below is the CLI command that does this
mgmt-user ssh-pubkey client-cert "Aruba-mgmt-user-crt" "ssh-global" "root" 

  

6. I don't know why, but when my script logged into a controller it wasn't put directly into enable mode, yet when logging in straight from the console I didn't have that issue. To avoid running into this, run the few commands below to make sure you won't have problems there.
configure t
enable bypass
write mem

Authentication test

To test the connection follow the below procedure
  1. Copy the "ssh-id_rsa" key into your ".ssh" directory
  2. Run the below command and check that you are logged in to the controller (the prompt should show the controller's name)
    ssh -i ~/.ssh/ssh-id_rsa ssh-global@<the_controller>


If this doesn't work you might need to change the permissions on the key with
chmod 400 ssh-id_rsa
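
For completeness, here is a minimal sketch of how a script could then use the key, assuming the third-party paramiko Python library; the hostname and key path are placeholders you would swap for your own:

import paramiko

client = paramiko.SSHClient()
# For a quick test only; in production verify the controller's host key properly.
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(
    "controller.example.com",            # your controller's address
    username="ssh-global",               # the management user created in step 5
    key_filename="/path/to/ssh-id_rsa",  # the private key from step 1
    look_for_keys=False,
)
stdin, stdout, stderr = client.exec_command("show version")
print(stdout.read().decode())
client.close()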

Hope this post helped in some way in your scripting endeavours and don't forget to share if it did, or if it didn't.

Monday, January 25, 2016

APs are... not HUBs!?!?

In my day-to-day wireless business I come across a lot of misconceptions about WLAN networks and how they work. In the end this leads to sub-par WLAN designs.

One of the biggest issues contributing to that is the lack of understanding of how the 802.11 protocol actually works. The channel occupation part that is.

As Keith Parsons (@KeithParsons) likes to say: "APs are HUBs!". And while the statement does drive home the point of how you should think of wireless communication, it's only half true (or at least not completely true).

Let's first look at what hubs are. A hub is simply a multi-port repeater. It's an archaic networking component which, upon receiving bits on one port, simply repeats that exact sequence on every other port. That means only one node can send bits at a time; all the other nodes receive those bits and process them if they are meant for them (if their hardware address is in the frames), otherwise they discard them.

This way of doing networking is greatly inefficient and results in very slow speeds, so hubs have long been replaced by switches, which incorporate much better logic and far better hardware. Switches separate that one collision domain into as many collision domains as they have ports, greatly increasing efficiency and throughput.

What about the wireless world? Considering the hub explanation you might expect that there's also repetition going on somewhere and that APs might be in charge of that, but you would be wrong. When a client device, associated to a particular AP, sends packets to a destination, it sends them through that AP by setting the AP's MAC address as the receiver address in the frame, but the AP does not repeat those bits over the air. What does happen, though, is that those same bits, the medium being unbounded, go everywhere, and this is very important to understand.

In fact that's the way it must work! When a station, either an AP or a client, sends out frames, every other station on that channel MUST hear those frames so they can set their timers and not "speak" at the same time. If one doesn't, that's when you get those pesky hidden nodes that negatively impact your network's goodput.

So why is Keith saying that? Well, it's a good one-liner that explains the situation, and when talking to non-technical people it's the one I use too, but as an engineer you should know what it really means.

So understand that APs are in fact not HUBs! They don't repeat the bits sent to them by client stations associated to them. It is rather the channel itself that is the HUB, and it must be that way for your network to function properly.

Thank you for reading my posts.

Tuesday, June 30, 2015

802.11ac – Evolution or revolution

There has been much talk of 11ac Wave 2 recently. After getting into one such discussion and giving my own view on it, someone said to me that I must hate 11ac, which is definitely not the case. I definitely like it, but what I seriously dislike is the faceless user exploitation in some of the marketing practices that brand 11ac as something it definitely is not and probably will not be, making it seem like a solution for all our woes. It was the same with 11n before it came out, and then after it did, we immediately started looking for the next thing.
Using words like switch-like, gigabit wireless and what have you is wrong and doesn't represent the technology correctly at all. I thought I'd give my own view on what 11ac means to me and what I do and don't expect from it.
For a much deeper understanding of the functions listed below I seriously suggest you get the book "802.11ac: A Survival Guide" by Matthew S. Gast.

256-QAM
Prior to 11ac the highest modulation was 64-QAM, which has 4 times fewer constellation points (6 bits per symbol instead of 8). The new modulation increased throughput by about 20% at the top coding rates, which is always welcome, but the problem here is achieving it, and consistently. Those who have done extensive tests on it say that the distance from the AP at which this modulation can be achieved is only a few meters. This makes it very impractical for almost all uses. The only one I can think of is a high-density deployment where APs are placed under users, like under seats, tables or floors, where the distance to users is very short, and even then the only users that will probably be able to use it are the ones in the immediate vicinity that don't have the signal blocked by their own or others' body mass.
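
To put numbers on that 20%, here is a back-of-the-envelope sketch in Python. The top 11n rate used 64-QAM with 5/6 coding (MCS7), while 11ac's 256-QAM comes with either 3/4 (MCS8) or 5/6 (MCS9) coding, so the gain is 20% or 33% depending on which you compare:

from math import log2

def data_bits(constellation_points, coding_rate):
    # coded bits per symbol times the share of them carrying user data
    return log2(constellation_points) * coding_rate

mcs7 = data_bits(64, 5/6)   # 5.0  - highest 11n modulation/coding
mcs8 = data_bits(256, 3/4)  # 6.0  - +20% over MCS7
mcs9 = data_bits(256, 5/6)  # 6.67 - +33% over MCS7
print(f"MCS8 gain: {mcs8/mcs7 - 1:.0%}, MCS9 gain: {mcs9/mcs7 - 1:.0%}")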

Wider channels
11n brought us 40MHz-wide channels, a twofold increase over before, and 11ac gives us 160MHz-wide channels, a fourfold increase over 11n's capability.
The question here is why anyone would ever go beyond the 40MHz mark for regular enterprise use or even home use. I can see 80 and 160MHz channels maybe being used in P2P links, but other than that not really. Even having a Gigabit link is a rare case.
In the sense of channel widths 11ac is neither revolutionary nor evolutionary; it will probably prove to be self-destructive.

MIMO
11n brought us multiple-input multiple-output radio architecture. In fact it defined that a radio chain can consist of up to 4 radios, hence the 4x4 nomenclature, a fourfold increase over 11a/b/g. 11ac evolved from that to allow for up to 8 such chains, a factor of 2 compared to 11n.
The gain from having more than 4x4 is highly questionable due to power requirements, design, return on investment and the ugliness of such APs. The more chains an AP has, the more options it has with regards to beamforming, but I have reservations about that too.
So having more radios will not bring anything new to WiFi, and it's hard to call it even an evolution. It's just something the standard allows, I guess.

Spatial streams
Prior to 11n WiFi was a one-stream-to-one-client affair, but 11n brought with it the ability to send 4 distinct streams to one station at a time, an increase of 4 times. 11ac continues this trend and adds an option of 8 simultaneous streams, twice that of 11n.
But one needs to understand much more than just the numbers here and realize that most client devices are at most 2x2. So whatever your AP is, at best it will only match what the client is capable of, which means the majority of chains are wasted most of the time unless APs employ a different technique for sending data through the redundant chains, like STBC. Getting a stable 3x3:3 connection even to capable clients is difficult and costs power, and most APs and/or client devices will rather disable a chain or two, or at most employ MRC to enable better reception.
Another thing to realize is that phones and phablets will only ever be 1x1 devices due to size and power restrictions, and tablets will be at most 2x2 for the same reasons. To integrate more radios, and therefore antennas, a device has to be the right size for it to even work; to integrate an 8x8 chain the device would have to be enormous, and even then it wouldn't matter much.
Having more than 2x2 chains is marginally useful, so in that respect 11ac is not a revolution at all; it's hardly an evolution.

Beamforming
Beamforming was introduced with 11n as a big revolutionary idea that would increase the signal strength at the client device and/or lower the amount of RF propagation, lowering the CCI an AP causes. But as the standard didn't specify which BF method to use, no one used any.
The only thing 11ac changed in that respect is that TxBF (as it's called) now has a standard way of sounding the channel so that proper weights can be applied to each radio in the chain. The catch here is that client devices must support it, which again is still rare, but given that there is only one method defined in 11ac, as opposed to about 9 that were defined in 11n, maybe we could see something there in the future.
As a side note I have my doubts about beamforming actually contributing in any big way in the real world either by lowering CCI or providing higher RSSI, but I don’t have much data to go on here. It’s more of a hunch and I could be wrong.

Throughput and efficiency
With 11n, speeds increased from a "mere" 54Mbps to up to 600Mbps of throughput, a factor of about 10. 11ac promises speeds of up to 7Gbps, a factor of about 12 over that. So 11ac hit and passed the Gigabit mark, a revolutionary step indeed… or is it? The fact is that these speeds can only be achieved through the use of multiple radio chains, spatial streams, wider channels and higher coding rates, all of which are very hard, or should I say impossible, to achieve due to many restrictions like power and size requirements, price and the pure laws of physics. So don't expect 11ac-capable devices to reach the Gigabit mark anytime soon, if at all. What one should always be striving for is to optimize the network to get devices on and off the medium as fast as possible with as few retries as possible, and in turn get the highest average speed possible.
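
For the curious, here is a sketch of where the headline number comes from, using the ingredients of the VHT data-rate product under best-case assumptions (160MHz, 8 streams, 256-QAM 5/6, short guard interval), none of which you are likely to see together in the field:

DATA_SUBCARRIERS_160MHZ = 468  # data subcarriers in a VHT160 channel
BITS_PER_SYMBOL = 8            # 256-QAM
CODING_RATE = 5 / 6            # MCS9
STREAMS = 8                    # the maximum 11ac allows
SYMBOL_TIME = 3.6e-6           # seconds per OFDM symbol with short GI

rate_mbps = (DATA_SUBCARRIERS_160MHZ * BITS_PER_SYMBOL * CODING_RATE
             * STREAMS / SYMBOL_TIME) / 1e6
print(f"{rate_mbps:.0f} Mbps")  # ~6933 Mbps, marketed as "up to 7Gbps"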

MU-MIMO
MU-MIMO is a very revolutionary idea. Up until now all standards defined PHY operation as one station occupying the channel at a time, which 11ac would like to change through the use of MIMO and beamforming to get better channel reuse.
The trick with this one is for all the receiving stations to be able to differentiate between the different streams, because every receiving station will receive every other station's data too. The analogy here is one of identical twins (or triplets) and being able to know which one is which. If you can't tell them apart, how will you know which one to take the data from to get the information that's meant for you?

Band operation
11n operates in both bands, whereas 11ac operates only in the 5GHz band. Although 80 and 160MHz channels can't even be used in the 2,4G band (they don't even fit there), 20 and 40MHz channels can, and the IEEE could have let the amendment stay in the "dead band", but it took the opportunity of a new standard and decided against it.
This is a very big thing and for me it's revolutionary. Even if you don't agree, it's at least a very big evolutionary step.

Chipsets
Chipsets, like mostly everything else, evolve. But the rate of evolution is always dependent on outside factors. In that respect 11ac at least sped up this rate, and every AP and client device that supports it is better off for it. If every node on the network can get on and off the channel faster, more can use it, speeding up the network for all.
Nothing revolutionary there, but the speed of evolution was probably helped by 11ac's introduction, and that's a very good thing for sure.

Conclusion
At the end of the day everyone is looking for more speed. At the start of the 11n days it was touted as the cure-for-all-our-woes standard, due to the much higher speeds all the bells and whistles brought with it. And I would agree the throughput increase and the efficiencies put into the amendment solved some issues, but those speeds can only be achieved IF proper design principles are employed, which, let's be honest, are still few and far between. The real revolution won't come with technology, but with realizing that knowledge is the essential ingredient that enables higher throughput, reliability and, in the end, happy users… or you can talk to your local sales representative to give you the right low-down.

Definite improvement:
  • Chipsets will evolve faster, which means better RF characteristics that will enable faster throughput
  • Mandatory use of 5GHz only

Marginal improvement:

  • Beamforming is standardized, but requires sounding which requires bandwidth and I have reservations about beamforming effectiveness in general
  • Throughput will be higher but only if networks are designed properly
  • 256-QAM

Most likely useless features:

  • 80/160MHz channels – self-destructive; maybe useful only in P2P links
  • Radio chains beyond 4x4 are unlikely due to power requirements, return on investment and the sheer ugliness of such APs
  • MU-MIMO – possible to achieve in the long run, but in how many scenarios will it actually be employed?

Friday, November 7, 2014

Power comparison of ETSI channels in the 5GHz spectrum

If you are a WLAN engineer in the EU you probably need to know something about channels in the 5GHz band. Three main reasons come to mind. The first is that being a WLAN engineer requires knowing it; the second is the frequent "What is the max power your APs operate at?" question; and the third is its follow-up, "How much do your APs cover compared to vendor X?".

For the sake of pragmatism I'll refer to the two bands as 2,4G and 5G.


I've been asked this quite a bit in my time, and I've found that the answer isn't as straightforward in the 5G band as it is in the 2,4G band, which adds a few variables for the WLAN designer to consider when planning a WLAN network.

I've come across a formal document specifying the use and operation of the UNII spectrum in the ETSI regulatory domain, and what I found was intriguing. The FCC in the USA divides the 5GHz band into 4 subsets: the UNII-1, UNII-2, UNII-2e and UNII-3 sub-bands. In the EU, to the best of my knowledge, we only use 1, 2 and 2e, combined into 2 blocks: UNII-1 and UNII-2 in one block (A) and UNII-2e in another (B). The two blocks have different usages, with block A permitted indoors only and block B permitted indoors or outdoors.


Each sub-band has a different number of channels available, with UNII-1 and UNII-2 each having 4 20MHz channels and UNII-2e having 11. The most intriguing thing I found were the EIRP allowances of those bands. As you can see from the graph, UNII-1 has a max EIRP of 23dBm (200mW), UNII-2 20dBm (100mW) and UNII-2e a whopping 27dBm (500mW). Below is the picture I painstakingly constructed with all the channels, EIRP, DFS and even boundaries - feel free to share.


The power difference among sub-bands means you can expect cell sizes to vary depending on the current channel used by a particular AP. 

But today's WLAN implementations aren't about how much you cover, but rather about the capacity and rate-over-range (RoR) characteristics each cell has, to be able to satisfy the different types of devices and services running on them. A sharp drop-off at the edge is also very important to limit co-channel interference (CCI) as much as possible. So planning with this graph in mind wouldn't be too bad of an idea.

Another thing to consider here is the comparison with the EIRP in the 2,4G ISM band. Under ETSI the max EIRP for 2,4G is 100mW (20dBm) (there might be exceptions), which is the same as the UNII-2 sub-band. On top of that, the propagation loss difference between 2,4G and 5G is roughly 6dB at the same distance, which means UNII-1 and UNII-2 are somewhat handicapped. This means that the "coverage" of a particular AP working in those sub-bands will be lower than in 2,4G, maybe even to a point where you will need a second AP working just on 5G to produce the same coverage. That said, the noise floor will dictate that too, and since 2,4G is getting more and more congested, the difference might not be that great these days.
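
If you want to check that rule of thumb, the roughly 6dB comes straight out of the free-space path loss formula, where loss grows with 20·log10 of the frequency. A quick sketch:

from math import log10

def fspl_delta_db(f1_mhz, f2_mhz):
    # extra free-space loss at f2 relative to f1, independent of distance
    return 20 * log10(f2_mhz / f1_mhz)

print(f"{fspl_delta_db(2412, 5180):.1f} dB")  # channel 1 vs channel 36 -> ~6.6 dB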

But jumping on the UNII-2e wagon isn't straightforward either, since one has to consider other factors, like the fact that it falls under DFS rules and that stations that don't support 11h can't actively probe there, which can make it a challenge for VoWiFi.

So I hope I've given you something to think about when designing your next WLAN and/or when troubleshooting an existing one. 

Comments and discussions are welcome as always.

Tuesday, September 30, 2014

Make a wall in Ekahau Site Survey

If you consider yourself a WLAN designer, you're going to have to use a site survey tool, and knowing how to construct walls in it will be the key to success. In this post I'll explain how to make a wall in the Ekahau Site Survey software tool, which as of this writing is on version 7.6.2.

Ekahau Site Survey (ESS for short) comes with a couple of predefined walls, but I'm yet to use any of them, for the simple reason that no 2 walls are alike. This is due to the fact that walls in ESS are 3D modelled, and any difference in thickness and/or attenuation value per meter will impact the result when modelling RF signal penetration through the wall.

To fully understand why this is important, imagine an AP in front of a 10cm-thick wall with some attenuation level, for ex. 50dB/m. When a narrow beam of RF signal from the AP penetrates the wall perpendicularly, i.e. hits the wall head on, it will lose a certain amount of energy while passing through those 10cm of wall material. But when another narrow beam of RF signal from the same AP hits the same wall at an angle, the distance traveled through the wall will be greater. If we take the example of a 45° angle, the distance traveled through the wall would be just over 14cm, or 40% more than when the beam hits it straight on.

So to compare the attenuation of the two beams:
- the head on beam would attenuate by 5dB
- the 45° beam would attenuate by 7dB.

Now if we were to use a predefined wall from ESS that would be for example 15cm instead of our 10cm with 50dB/m the following would be true:
- the head on beam would attenuate by 7,5dB (Δ of 2,5dB to the example above)
- the 45° beam would attenuate by 10,5dB (Δ of 3,5dB to the example above)

So it is important to get your measurements right in order to construct a proper wall in ESS and be able to make the right WLAN design for the building.
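
The geometry above fits in a few lines of Python; a small sketch you can adapt, using the values from the example:

from math import cos, radians

def wall_loss_db(thickness_m, db_per_m, angle_deg=0.0):
    # attenuation grows with the path length through the wall,
    # which is thickness / cos(angle from perpendicular)
    path = thickness_m / cos(radians(angle_deg))
    return path * db_per_m

print(wall_loss_db(0.10, 50))      # 5.0 dB head on
print(wall_loss_db(0.10, 50, 45))  # ~7.1 dB at 45 degrees
print(wall_loss_db(0.15, 50, 45))  # ~10.6 dB for the 15cm predefined wall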


NOTE!
This post is only relevant if you are still using a version of ESS before v8. As of ESSv8 constructing a wall is made so much easier in the GUI. Ekahau indubitably made this change in response to this post. A fact which they will surely deny in public, but one that is true. OK, that last sentence is probably a figment of my imagination, but that won't stop me from putting it in my CV.

Where to start?

To construct a wall you'll need to edit 2 files in your ESS installation directory. In the "/conf" directory you'll find various object type and property files, and that's also where you'll find the 2 files you'll need to edit. These are "wallTypes.xml", which holds the actual data properties of a wall, and "wallTypes.properties", which is the key-to-name mapping file. The name in this file is the name you'll see in ESS when selecting the wall button to draw the wall.


The "wallTypes.xml" file

The wallTypes.xml file consists of an XML schema. Each wall type is defined inside a <type> element that houses 4 other elements and a comment at the beginning. The comment here is especially useful for clearly explaining the properties of your wall, and I suggest you use it too. The whole wall schema looks like the one below

<type id="wall_id">
        <!-- wall Xm YdB = Z dB/m -->
        <key>wall_key_mapping</key>
        <width>some width in meters</width>
        <absorption>attenuation per meter</absorption>
        <color>color HEX</color>
</type>

So you can just copy/paste one of the schemas and edit it. The elements inside the type element are pretty self-explanatory, but I'll go through them here:
  • key is the element used to map the wall property to the name of your property in the "wallTypes.properties" file
  • width is the actual width of the wall you are creating
  • absorption is the amount of attenuation the wall presents
  • color is the color you want it to appear in your ESS
I'm yet to understand what the id value in the type element is used for, but you can make it the same as your key element just in case.
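
As an illustration, here is what a hypothetical entry for the 10cm, 50dB/m wall from the earlier example could look like (the id, key and color are arbitrary choices of mine; the key ties in with the mapping example in the next section):

<type id="my_key">
        <!-- wall 0.1m 5dB = 50 dB/m -->
        <key>my_key</key>
        <width>0.1</width>
        <absorption>50</absorption>
        <color>#CC0000</color>
</type>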

The "wallTypes.properties" file

The key-to-name mappings file is pretty straightforward, with a "key = name" line for each wall type you make. Note that ESS will show a dB attenuation value in brackets next to the name, but not the width, so I suggest you put the width in the wall's name, just so you'll know it. For example

my_key = My wall 10cm

and the wall will be visible like so



Don't forget to merge

There is one caveat to this feature. A new wall type will appear only in new projects started after you have edited and saved both "wallTypes" files. For existing projects made before you edited those files, you will need to use the File > Merge feature in order to see the new wall in the wall drop-down list. Then you can overwrite your previous project.

Final thoughts

Hopefully wall construction will become easier in the future, as editing files is just not practical. Plus, there is a bug: whenever you make a change to an existing wall type, ESS will show a duplicate of that wall in the GUI. But for now you know how to make a wall to fit the needs of the building you are planning the WLAN network for.

Sunday, August 17, 2014

802.11h in action

As you may or may not know, part of the 802.11-2012 standard specifies DFS, or Dynamic Frequency Selection, due to regulations that apply in most regulatory domains for RLANs in the 5GHz spectrum. DFS is there so that 802.11 radios don't interfere with other (more important) radar systems in the same radio vicinity. These are usually weather radars, like the ones used by airplanes, and having such a capability is a really good idea. DFS was originally part of the 802.11h amendment, which in turn is now part of the 802.11-2012 standard, but I'll refer to it as 11h.

In very short, the .11 radio in an AP wishing to operate on one of the DFS (UNII-2/2e) channels must continuously scan that channel for any presence of radar and must cease transmission on that channel if it detects a radar source. Some of the rules and procedures are written in this document from Cisco.

The AP's objective when changing channels is to keep disruption to the BSS minimal, and one of the DFS procedures that helps with this is setting the channel switch announcement (CSA) element in Beacon, Probe Response and/or Action management frames, which tells the AP's associated STAs which channel the AP is hopping to and when. Below is an example of a beacon frame with a CSA element set. You can filter these frames out of a capture with the "wlan_mgt.csa.channel_switch_mode" filter in Wireshark.
The count value is the number of beacons remaining to be broadcast on the current operating channel. The number here is 20, which indicates that 20 beacons (about 2 seconds), including this one, are left before the change to channel 128 is made. This number decreases with every sent beacon, and when it reaches 1 you won't see another beacon, or any other frame from the AP broadcasting the BSS, on the channel.
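
If you want to dig these out of a capture programmatically, here is a minimal sketch using the third-party scapy library, assuming a monitor-mode capture saved as "capture.pcap" (the file name is a placeholder). The CSA information element has ID 37 and a 3-byte body: switch mode, new channel and count.

from scapy.all import rdpcap, Dot11Beacon, Dot11Elt

for pkt in rdpcap("capture.pcap"):
    if not pkt.haslayer(Dot11Beacon):
        continue
    # walk the chain of information elements in the beacon
    elt = pkt.getlayer(Dot11Elt)
    while elt is not None:
        if elt.ID == 37 and len(elt.info) >= 3:
            mode, new_channel, count = elt.info[0], elt.info[1], elt.info[2]
            print(f"CSA: mode={mode} new_channel={new_channel} count={count}")
        elt = elt.payload.getlayer(Dot11Elt)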

This assistance with changing channels isn't a guarantee that the STAs will actually accept the change and follow the AP to the new channel, but most STAs will. Following the AP is the only logical thing to do, since the AP is handing the STA a new pipe to the net on a plate, but not all STAs are made the same and some can switch to a new or even different BSS if they so choose.

If you've read this far, first a thank you, and if you're asking yourself why I am writing about this, or who cares about channel changes, I just wanted to point out one vendor's clever use of this function. Ruckus Wireless APs employ what their marketing calls ChannelFly. What their APs do is basically hop periodically through the available channels in search of the one with the best throughput characteristics. 'Everybody uses that. It's called background scanning', I hear you saying. Well, CF differs from background scanning in that it doesn't go off-channel to scan: it actually changes channels, operates on the new channel, takes measurements there, and then hops again, with the point of finding the channel with the best throughput and capacity characteristics. Each time it hops it uses the CSA element in its beacons, to hopefully take all of its associated STAs with it. 'Hopefully!?!' Well, for the most part it does so without a problem. I've found many 5GHz STAs follow the AP without a problem, but some STAs might cause issues. I have an HP laptop that operates in 2,4GHz only and supports 11h, but upon a channel change it just gets lost and I have to disable/enable the NIC to get it running again.

My recommendation for CF would be to try it and see. I consider it pretty safe when enabled on 5GHz; for 2,4GHz, try it and watch whether any STAs have problems when the AP changes channels.

Monday, August 4, 2014

Fast & furious WLAN dB math

This post is a different look at Keith Parsons' "Easy dB Math in 5 minutes", but by no means a replacement. I've found that I learn things faster if I see them from different perspectives, and this is just that, a different perspective on the same subject.

There are only 2 things that you need to know really. The first is the linear to logarithmic conversion which Keith describes in Rule #2. This is what you need to remember.

+3dB = times 2 in linear form
-3dB = divide by 2 in linear form

And the other

10dB = 10 in linear form

The last one is important at the beginning, when picking a reference point to start from. For about 99,9% of things in WLAN design the only 3 reference points you'll need are 10dBm, 20dBm and 30dBm, or 10mW, 100mW and 1000mW in linear terms.

To convert from one to the other just remember this: the number of zeroes in the linear value gives the first digit of the dBm value, and then you just add a zero after it. For example

1000mW has 3 zeroes, which you write as 30 to get the dBm value

And for the other way around, the first digit of the dBm value (or dB or any other dBx value) defines the number of zeroes you add after the number 1. For example

20dBm needs to have 2 zeroes after 1 or 100mW

Learn by doing

So to put this into practice: I've said that picking the right starting point is the key to fast conversion. For example, if we wanted to convert 27dBm to mW, where would we start? The reference needs to be one from which you can step up or down in 3dB increments to the specified dBm value (27dBm), and then simply convert that to the linear value.

Let's first try to use 20dBm as the reference. If we add 3dB steps from there we can't land on 27dBm, as

20dBm +3dB + 3dB is 26dBm and
20dBm +3dB +3dB +3dB is 29dBm

So a better reference would be 30dBm since
30dBm -3dB = 27dBm

Since we know that 30dBm means 3 zeroes after the number 1, and -3dB means we divide that by 2, we can calculate that

1000mW divided by 2 is 500mW

We can make another example and convert 19dBm. In this case neither 20dBm nor 30dBm would be the right starting point, since we can't subtract 3dB steps from either of those to get to 19. But if we take 10dBm as the reference we can count up in 3dB steps like so

10dBm +3dB +3dB +3dB = 19dBm

which translates to

10 x2 x2 x2 = 80mW
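
If you ever want to double-check the mental math, the exact conversion is a one-liner; a quick Python sketch:

# exact conversion: mW = 10 ** (dBm / 10)
for dbm in (27, 19, 10, 20, 30):
    print(f"{dbm} dBm = {10 ** (dbm / 10):.0f} mW")
# 27dBm -> 501mW and 19dBm -> 79mW: the 3dB rule's 500mW and 80mW are
# within a hair, because +3dB is really times 1.995, not times 2.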

So as you can see it's pretty easy, and hopefully you'll now find it easier to convert from linear to dB and vice versa.