This project is a proof of concept (POC) for our paper "An Intelligent Q-Learning Approach for Energy-Efficient Channel Occupancy in NR-U Cellular Networks for Fair Unlicensed Spectrum Access".

A simulator developed from scratch in Python to demonstrate various coexistence scenarios for NR-U and Wi-Fi topologies in the unlicensed spectrum.
Instructions to set up the project locally

- Python version 3.9+ is required; install Python from the official website.
- Clone the repo:

```shell
git clone https://github.com/chimms1/NRU-WiFi-Simulator.git
```

- Install the Python packages (project dependencies):

```shell
pip install numpy pandas matplotlib seaborn tqdm openpyxl
```
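As a quick sanity check after installation, the dependencies can be verified with a short snippet. The helper below is hypothetical (not part of the repo); it only checks that each package is importable in the current environment.

```python
import importlib.util

# Hypothetical helper (not part of the repo): report which of the
# simulator's dependencies are missing from the current environment.
def missing_packages(packages=("numpy", "pandas", "matplotlib",
                               "seaborn", "tqdm", "openpyxl")):
    return [pkg for pkg in packages if importlib.util.find_spec(pkg) is None]

if __name__ == "__main__":
    missing = missing_packages()
    print("all dependencies installed" if not missing else f"missing: {missing}")
```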
Branches:

- `main`: has the contents of the `rl-dfs` branch.
- `rl-dfs`: contains the algorithm with 7 states for Q-learning based Dynamic COT Optimization.
- `dyna-q`: contains the algorithm with 21 states for Q-learning based Energy-Efficient COT Optimization. Load change can be toggled in `ConstantParams` and used with Dyna-Q+.
- `Power-State`: (deprecated, use the `dyna-q` branch) contains the algorithm with 21 states for energy-efficient Dynamic Frame Selection based on Q-learning.
- `CSMA/CA`: development branch used to test the implementation of the CSMA/CA algorithm present in the main file.
- `dev-y`: development branch, used for implementation and testing.
All the network entities are modeled as classes:

- Main class (`Simulator/main-latest-all.py`): the main file that runs the simulation.
- Service class (`Simulator/running/ServiceClass.py`): contains all the methods used to perform operations such as creating users, calculating resources, and more.
- Params class (`Simulator/running/ConstantParams.py`): contains the parameters set by the user.
- Verbose class (`Simulator/running/Print.py`): contains the flags to print specific information and plot graphs.
- BaseStation class (`Simulator/entities/BaseStation.py`): contains the class definition for the NR-U BS and Wi-Fi AP.
- UserEquipment class (`Simulator/entities/UserEquipment.py`): contains the class definition for NR-U and Wi-Fi user equipment.
- Learning class (`Simulator/Qlearning/learning.py`): contains the class definition for Q-learning (reward function, Q-table operations, actions).
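For orientation, the kind of tabular Q-learning update the Learning class encapsulates can be sketched as follows. This is a minimal illustration with made-up state/action counts and parameters, not the simulator's actual API:

```python
import random

# Minimal sketch of tabular Q-learning with an epsilon-greedy policy.
# State/action spaces and hyperparameters are illustrative only (e.g. a
# 7-state channel-occupancy model with a handful of COT-duration actions).
class QLearner:
    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose_action(self, state):
        # Explore with probability epsilon, otherwise exploit the best action
        if random.random() < self.epsilon:
            return random.randrange(len(self.q[state]))
        row = self.q[state]
        return row.index(max(row))

    def update(self, state, action, reward, next_state):
        # Standard Q-learning temporal-difference update
        best_next = max(self.q[next_state])
        td_target = reward + self.gamma * best_next
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])

learner = QLearner(n_states=7, n_actions=3)
learner.update(state=0, action=1, reward=1.0, next_state=2)
```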
- Set the desired number of users, number of iterations, noise, pTx, datarate profile, etc. in `ConstantParams.py`.
- Set exploration-exploitation iterations accordingly in `learning.py`.
- Set flags in `Print.py` to print information.
- Do additional configuration if required.
- Run `main-latest-all.py`:

```shell
python main-latest-all.py <seed-value>
```

- Setting a seed value will help in recreating UE deployments.
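The effect of the seed can be illustrated with a minimal sketch. The function below is hypothetical (not the simulator's code); it only shows that deployments drawn from the same seed are identical across runs:

```python
import random

# Hypothetical illustration of seeded UE deployment: same seed, same layout.
def deploy_ues(seed, n_ues=5, cell_radius=100.0):
    rng = random.Random(seed)  # local RNG so separate runs don't interfere
    return [(rng.uniform(-cell_radius, cell_radius),
             rng.uniform(-cell_radius, cell_radius)) for _ in range(n_ues)]

# Identical seeds reproduce the exact same UE positions
assert deploy_ues(seed=7) == deploy_ues(seed=7)
```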
- Currently, only a single NR-U BS and WiFi AP can be used.
- Increasing users to more than 30 may cause a decrease in SINR.
- Exceptions are not handled in many cases.
- Internal code may lack documentation in a few places.
- Complete research work done in this project will be published in 2025-26.
The experiment setups and their accumulated data are present in the `Results-Generation` directory on the `main` branch.
Distributed under the AGPL-3.0 License. See LICENSE.txt for more information.
Supervisor: Dr. Vijeth Kotagi
- Yash Deshpande
- Shreyas Joshi
- Ramita Commi
- Gurkirat Singh
Resources that we found helpful