Ram Mohana Reddy Guddeti received the B.Tech. degree in Electronics and Communication Engineering from S.V. University, Tirupati, India, in April 1987, the M.Tech. degree in Telecommunication Systems Engineering from IIT Kharagpur, India, in January 1993, and the Ph.D. degree in Artificial Intelligence (Perceptually Motivated Blind Source Separation Algorithms for Next Generation Hearing Aids) from The University of Edinburgh, United Kingdom, in October 2005. He has been a Professor of Information Technology at the National Institute of Technology Karnataka (NITK) Surathkal, Mangalore, India, since January 2009, and has been a Senior Professor, i.e. Professor (HAG), of Information Technology at NITK Surathkal since October 2018. Between August 2009 and August 2019 he held several administrative positions at NITK Surathkal, namely Head of the Information Technology Department (Aug. 2009 – Aug. 2012 and Aug. 2015 – Aug. 2019) and Chairman of the Central Computer & Data Center (Aug. 2012 – Aug. 2015).
Prof. G. Ram Mohana Reddy has more than 35 years of experience in teaching, research, administration, consultancy, and knowledge and skills development. He has more than 250 research publications in reputed international journals (IEEE, IET, Elsevier, Springer), book chapters, and conference proceedings (ACM, IEEE, Elsevier, Springer).
Prof. Ram Mohana Reddy has successfully guided 12 Ph.D. scholars; 4 Ph.D. scholars and 1 post-doctoral fellow are currently carrying out their research under his supervision. He has also guided 3 M.Tech. (Research) theses, more than 40 M.Tech. theses, and 150 B.Tech. projects. His research interests include AI; cloud, edge, fog, and mist computing; smart agriculture/building/campus/city/home/factory; and social multimedia and social network analysis. He is a Senior Member of the ACM and IEEE (USA), a Life Fellow of IETE (India), and a Life Member of the Computer Society of India and ISTE (India).
As part of his research interactions, Prof. Ram Mohana Reddy has visited several countries, including Australia, Belgium, Canada, China, France, Germany, Italy, Japan, Malaysia, the Netherlands, Singapore, Switzerland, the UAE, the UK, and the USA. He has visited world-renowned universities such as Cambridge, Harvard, Melbourne, MIT, NTU, NUS, Oxford, Peking, Philips Research/Technical University Eindhoven, Toronto, and Tsinghua.
Prof. Reddy received the prestigious Commonwealth Scholarship and Fellowship, funded by the Foreign and Commonwealth Office, Government of the UK, for pursuing doctoral research at The University of Edinburgh, UK, during 2002–05. He also received state/national merit scholarships for his school and college education during 1975–87.
profgrmreddy@nitk.ac.in
profgrmreddy@nitk.edu.in
http://infotech.nitk.ac.in/faculty/ram-mohana-reddy-guddeti
DVP term expires December 2025
Presentations
Fog-Based Frameworks for IoT/IIoT Service Placement and Data Analytics in Smart Application Environments
There is an exponential increase in the number of Internet of Things (IoT) devices used to monitor and control activities in smart environments, and with it a corresponding increase in the demand for computational and storage resources. Cloud computing provides these resources, but it requires the entire data to be transferred to the cloud. Using cloud computing for all IoT/Industrial IoT (IIoT) applications is therefore not feasible, as some of these applications are delay-sensitive and require service in real time to avoid significant failures. Hence, a distributed fog computing architecture is developed to provide computational and storage resources at the network edge to process and analyse the data. The main research challenges in a fog computing environment are to realise the fog computing infrastructure on resource-constrained devices using virtualisation techniques to provide the computational resources, and to use these fog nodes for service placement and for deploying machine learning models for real-time data analytics.

This research work focuses on developing fog frameworks for IoT/IIoT service placement and machine learning model deployment to process and analyse sensor data, reducing service time and resource consumption and thus enabling real-time monitoring of smart environments. The Fog-Cloud computing environment is used to place the IoT/IIoT services based on resource availability and deadlines, addressing the above research challenges. The service placement problem in the Fog-Cloud computing environment is formulated as a multi-objective optimisation problem, and a novel cost-efficient, deadline-aware service placement algorithm is developed to place the services on the Fog-Cloud resources, ensuring the QoS of the IoT/IIoT services in terms of deadline, service cost, and resource availability. Simulator- or virtual machine-based resource provisioning frameworks are not feasible here, as they take more time and consume more resources; hence, a container-based fog computing framework is developed on 1.4 GHz 64-bit quad-core processor devices to realise the fog computing architecture on resource-constrained devices. Further, the service placement problem in the fog computing environment is formulated as a multi-objective optimisation problem, and meta-heuristic algorithms, namely the Elite Genetic Algorithm (EGA), a Modified Genetic Algorithm with Particle Swarm Optimisation (MGAPSO), and EGA with Particle Swarm Optimisation (EGAPSO), are developed for IoT/IIoT service placement in the fog computing environment. The results show that the hybrid EGAPSO-based service placement on the fog nodes reduces service time, cost, and energy consumption.
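To make the multi-objective formulation concrete, the following is a minimal, hypothetical sketch of a weighted-sum fitness function and an elite genetic algorithm that places services on fog/cloud nodes. The node parameters, objective weights, penalty values, and mutation-only reproduction are illustrative assumptions, not the EGA/MGAPSO/EGAPSO algorithms evaluated in this work.

```python
# Toy elite-GA service placement sketch (assumed model, not the authors' exact one).
import random

NUM_SERVICES = 10
NODES = [
    {"name": "fog-1", "latency": 5.0, "cost": 2.0, "energy": 1.0, "capacity": 4},
    {"name": "fog-2", "latency": 6.0, "cost": 2.5, "energy": 1.2, "capacity": 4},
    {"name": "cloud", "latency": 50.0, "cost": 1.0, "energy": 3.0, "capacity": 100},
]
DEADLINE = 20.0  # assumed uniform per-service deadline

def fitness(placement):
    """Weighted sum of service time, cost and energy, penalising
    deadline misses and node capacity violations (weights assumed)."""
    time = cost = energy = penalty = 0.0
    load = [0] * len(NODES)
    for node_idx in placement:
        node = NODES[node_idx]
        load[node_idx] += 1
        time += node["latency"]
        cost += node["cost"]
        energy += node["energy"]
        if node["latency"] > DEADLINE:
            penalty += 100.0  # delay-sensitive service missed its deadline
    for idx, node in enumerate(NODES):
        if load[idx] > node["capacity"]:
            penalty += 100.0 * (load[idx] - node["capacity"])
    return 0.4 * time + 0.3 * cost + 0.3 * energy + penalty

def elite_ga(pop_size=30, generations=100, elite_frac=0.2):
    """Keep the best placements each generation; refill the population
    with mutated copies of elites (crossover omitted for brevity)."""
    pop = [[random.randrange(len(NODES)) for _ in range(NUM_SERVICES)]
           for _ in range(pop_size)]
    n_elite = max(1, int(elite_frac * pop_size))
    for _ in range(generations):
        pop.sort(key=fitness)
        elites = pop[:n_elite]
        children = []
        while len(children) < pop_size - n_elite:
            child = random.choice(elites)[:]
            child[random.randrange(NUM_SERVICES)] = random.randrange(len(NODES))
            children.append(child)
        pop = elites + children
    return min(pop, key=fitness)

best = elite_ga()
print("placement:", [NODES[i]["name"] for i in best], "fitness:", fitness(best))
```

In this toy setting the penalty terms push delay-sensitive services onto the fog nodes until their capacity is exhausted, which mirrors the deadline-versus-resource trade-off the work formulates.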
Using fog nodes to deploy the machine learning models that analyse the data reduces the volume of data to be transferred to the cloud, which can reduce network congestion and service time and thus enable quick decision-making. A fog server-based framework is developed as a prototype for intelligent machine malfunction monitoring in the Industry 4.0 environment. Various supervised machine learning models are developed and deployed on the fog server at the network edge to analyse the data and thus enable real-time monitoring in the smart industry/Industry 4.0 environment. The fog server framework is used for industrial machine monitoring, detecting and classifying a machine's state as normal or abnormal using its operating sounds. The experimental results show the machine learning models' performance on the sounds of various machines recorded at different signal-to-noise ratio levels for normal and abnormal operation, using Linear Prediction Coefficients (LPC) and Mel Frequency Cepstral Coefficient (MFCC) audio features. Using the fog server prototype for monitoring reduces the total monitoring time and thus helps avoid significant machine failures in the industrial environment.
Keywords: Abnormal, Containers, Energy Consumption, Industry 4.0, Internet of Things, IIoT, Malfunction Monitoring, Meta-heuristic, MFCC, Normal, Resource Provisioning, Service Placement.
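As a hedged illustration of the MFCC-based normal/abnormal sound classification described above, the sketch below extracts mean MFCC vectors with librosa and trains a simple classifier. The directory layout, the 13-coefficient setting, and the choice of an SVM are assumptions for illustration, not the exact models deployed on the fog server.

```python
# Minimal MFCC machine-sound classifier sketch (paths and model are assumed).
import glob
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def mfcc_features(path, n_mfcc=13):
    """Load one machine-sound clip and summarise it as its mean MFCC vector."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)  # one fixed-length vector per clip

X, labels = [], []
for label, pattern in [(0, "sounds/normal/*.wav"),      # hypothetical paths
                       (1, "sounds/abnormal/*.wav")]:
    for path in glob.glob(pattern):
        X.append(mfcc_features(path))
        labels.append(label)

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(labels), test_size=0.2, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)  # model deployed at the edge
print("accuracy:", clf.score(X_test, y_test))
```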
Unobtrusive Context-Aware Human Identification and Action Recognition System for Smart Environments
A smart environment has the ability to securely integrate multiple technological solutions to manage its assets, such as the information systems of local government departments, schools, transportation networks, hospitals, and other community services. Such environments utilise low-power sensors, cameras, and software with artificial intelligence to monitor the system's operation continuously, and they require appropriate monitoring technologies to provide a secure living environment and efficient management. Global security threats have generated substantial demand for intelligent surveillance systems in smart environments; consequently, the number of cameras deployed in smart environments to record the happenings in their vicinity is increasing rapidly. The advancement of cameras such as Closed Circuit Television (CCTV), depth sensors, and mobile phones used to monitor human activities has resulted in an explosion of visual data in recent years, and considerable effort is required to interpret and store all of it. Numerous applications of intelligent environments rely on the content of captured videos, including smart video surveillance to monitor human activities, crime detection, intelligent traffic management, and human identification. Intelligent surveillance systems must perform unobtrusive human identification and human action recognition to ensure a secure, smart, and pleasant living environment.

This research work presents various approaches using advanced deep learning technology for unobtrusive human identification and human action recognition based on visual data in various modalities. It explores the unobtrusive identification of humans based on skeleton and depth data, and presents several methods for recognising human actions using RGB, depth, and skeleton data. First, a domain-specific human action recognition system using RGB data for a computer laboratory environment is presented; for this, a dataset of human actions specific to the computer laboratory environment is created, providing several samples for five different actions that occur in college computer laboratories. A transfer learning-based human action recognition system is also presented for locating and recognising human actions. Human action recognition systems based on skeleton and depth data are developed and evaluated on different publicly available datasets using different evaluation metrics. Skeleton-based action recognition mainly concentrates on the 3D coordinates of the various skeleton joints of the human body, and several efficient methods for representing actions from sequences of skeleton frames are presented. The skeleton-based human action recognition system places the skeleton joints in a specific order and extracts the distances between joints as features; a multi-layer deep learning model is proposed to learn these features for action recognition. In addition, multi-modal human action recognition systems are developed using skeleton and depth data, with an efficient image representation of human actions constructed from sequences of skeleton and depth data. Various deep learning models using CNNs, LSTMs, and advanced techniques such as attention are presented to extract and learn features from the image representations of the actions. The developed systems are evaluated on publicly available human action datasets using standard evaluation protocols.
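The joint-distance feature idea above can be sketched in a few lines: order the joints, compute all pairwise distances, and feed them to a multi-layer model. The 25-joint layout (as in a Kinect-style skeleton), the synthetic stand-in data, and the MLP classifier below are illustrative assumptions, not the authors' exact architecture.

```python
# Pairwise joint-distance features for skeleton-based action recognition (sketch).
import numpy as np
from itertools import combinations
from sklearn.neural_network import MLPClassifier

NUM_JOINTS = 25  # assumed Kinect-style skeleton

def joint_distance_features(skeleton):
    """skeleton: (NUM_JOINTS, 3) array of 3D joint coordinates for one frame.
    Returns distances between all joint pairs, in a fixed order."""
    return np.array([np.linalg.norm(skeleton[i] - skeleton[j])
                     for i, j in combinations(range(NUM_JOINTS), 2)])

# Synthetic stand-in data: 200 frames, 5 action classes.
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, NUM_JOINTS, 3))
actions = rng.integers(0, 5, size=200)
X = np.stack([joint_distance_features(f) for f in frames])

# A small multi-layer network learns the action classes from the distances.
model = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300)
model.fit(X, actions)
print("training accuracy:", model.score(X, actions))
```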
Human gait is one of the most useful biometric features for human identification, and vision-based gait data allows humans to be identified unobtrusively. This research work presents deep learning-based human identification systems using gait data in skeleton format. An efficient feature extraction method is presented that captures the spatial and temporal features of the human skeleton joints during walking, focusing specifically on the features observed during the different gait events of the entire gait cycle. Deep learning models are developed to learn these features for accurate human identification. The developed models are evaluated on publicly available single- and multi-view gait datasets using various evaluation protocols and performance metrics.
Keywords: Deep learning, Human action recognition, Human identification, Smart environments, Smart surveillance.
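One way to picture the spatio-temporal modelling above is a recurrent network over per-frame skeleton features across a gait cycle. The sketch below is a minimal PyTorch stand-in; the sequence length, feature dimension, and number of subjects are illustrative assumptions rather than the models developed in this work.

```python
# Hypothetical LSTM over a gait cycle of skeleton features (sketch).
import torch
import torch.nn as nn

class GaitLSTM(nn.Module):
    def __init__(self, feat_dim=75, hidden=128, num_subjects=20):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_subjects)

    def forward(self, seq):           # seq: (batch, frames, feat_dim)
        _, (h_n, _) = self.lstm(seq)  # temporal summary of the gait cycle
        return self.head(h_n[-1])     # identity logits per subject

model = GaitLSTM()
gait_cycle = torch.randn(4, 60, 75)  # 4 clips, 60 frames, 25 joints x 3D
logits = model(gait_cycle)
print(logits.shape)  # torch.Size([4, 20])
```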
Development of an Unobtrusive Affective Computing Framework for Students' Engagement Analysis in the Classroom Environment
Pervasive intelligent learning environments can be made more personalised by adapting teaching strategies according to the students' emotional and behavioural engagement. Students' engagement analysis helps to foster those emotions and behavioural patterns that are beneficial to learning, thus improving the effectiveness of the teaching-learning process. The students' emotional and behavioural patterns are to be recognised unobtrusively using learning-centered emotions (engaged, confused, frustrated, and so on) and engagement levels (looking away from the tutor or board, eyes completely closed, and so on). Recognising both behavioural and emotional engagement from students' image data in the wild (obtained from classrooms) is a challenging task. The use of a multitude of modalities enhances the performance of affective state classification, but recognising the facial expressions, hand gestures, and body posture of each student in a classroom environment is another challenge. Here, classification of affective states alone is not sufficient; object localisation also plays a vital role. Both classification and localisation should be robust enough to perform well across image variants such as occlusion, background clutter, pose, illumination, cultural and regional background, intra-class variations, cropped images, multi-point views, and deformations. The most popular, state-of-the-art classification and localisation techniques are machine and deep learning techniques, which depend on a database for the ground truth; a standard database containing data from different learning environments with a multitude of modalities is therefore also required.

Hence, in this research work, different deep learning architectures are proposed to classify the students' affective states with object localisation. A standard database of students' multi-modal affective states is created and benchmarked. The students' affective states obtained from the proposed real-time affective state classification method are used as feedback to the teacher in order to enhance the teaching-learning process in four different learning environments, namely e-learning, classrooms, webinars, and flipped classrooms. More details of this research work are as follows. A real-time students' emotional engagement analysis is proposed for both individual students and groups of students, based on their facial expressions, hand gestures, and body postures, for the e-learning, flipped classroom, classroom, and webinar environments. Both basic and learning-centered emotions are used in the study, and various CNN-based architectures are proposed to predict the students' emotional engagement. A students' behavioural engagement analysis method is also proposed and implemented in classrooms and computer-enabled teaching laboratories. The proposed scale-invariant, context-assisted, single-shot CNN architecture performed well for multiple students in a single image frame. A single group engagement level score for each frame is obtained using the proposed feature fusion technique. The proposed model effectively classifies the students' affective states into teacher-centric attentive and inattentive affective states. Inquiry interventions are proposed to address the negative impact of inattentive affective states on the students' performance. The results demonstrate a positive correlation between the students' learning rate and their attentive affective state engagement score, for both individual students and groups of students.
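To illustrate how per-frame group engagement scoring of this kind could work, here is a minimal sketch that fuses per-student attentiveness probabilities from face, gesture, and posture classifiers into one group score. The modality weights and the simple weighted-average fusion are assumptions for illustration, not the feature fusion technique proposed in this work.

```python
# Hypothetical per-frame group engagement scoring via modality fusion (sketch).
import numpy as np

MODALITY_WEIGHTS = {"face": 0.5, "gesture": 0.25, "posture": 0.25}  # assumed

def student_attentive_score(modality_probs):
    """modality_probs: dict mapping modality -> P(attentive) from that
    modality's classifier. Returns the weighted fusion for one student."""
    return sum(MODALITY_WEIGHTS[m] * p for m, p in modality_probs.items())

def group_engagement(frame_predictions):
    """frame_predictions: one dict of modality probabilities per student
    detected in the frame. Returns a single group-level engagement score."""
    scores = [student_attentive_score(p) for p in frame_predictions]
    return float(np.mean(scores)) if scores else 0.0

frame = [
    {"face": 0.9, "gesture": 0.7, "posture": 0.8},  # attentive student
    {"face": 0.2, "gesture": 0.4, "posture": 0.3},  # inattentive student
]
print("group engagement:", group_engagement(frame))
```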
Further, an affective state transition diagram and visualisations are proposed to help the students and teachers improve the teaching-learning process. A multi-modal database is created for both the e-learning environment (a single student in a single image frame) and the classroom environment (multiple students in a single image frame) using the students' facial expressions, hand gestures, and body postures. Both posed and spontaneous expressions are collected to make the training set more robust, and various image variants are considered during dataset creation. Annotations are performed using a gold-standard study for eleven different affective states and four different engagement levels. Object localisation is performed on each modality of every student, and the bounding box coordinates are stored along with the affective state/engagement level. The database is benchmarked against various popular classification algorithms and state-of-the-art deep learning architectures.
Keywords: Affective Computing; Affect Sensing and Analysis; Behavioural Patterns; Classroom Data in the Wild; Computer Vision; Multi-modal Analysis; Student Engagement Analysis.