Analytics India Magazine | AIM – News and Insights on AI, GCC, IT, and Tech

‘Soon, We Might Have One Humanoid for Every Human,’ Says Addverb Co-Founder

Satish Shukla believes it will take another three to four years for humanoids to become as prevalent as humans.


Over the past few years, humanoids have become the talk of the internet, thanks to breakthroughs in artificial intelligence (AI), machine learning (ML), synthetic data and robotics. Companies worldwide are racing to deploy intelligent, autonomous humanoids that can work alongside humans in industrial environments.

As seen in recent collaborations between major AI and robotics firms such as Apptronik, Google DeepMind, NVIDIA and Foxconn, companies are striving to create versatile robots capable of dynamic, real-world tasks.

With an increasing focus on efficiency, safety, and automation, humanoids are poised to become an essential part of the workforce. Companies like Figure AI, Tesla, UBTECH, Agility Robotics, and Addverb are now exploring humanoids.

In an interview with AIM, Satish Shukla, co-founder of Addverb, discussed the company’s plans to launch its humanoid this year and his views on the future of humanoids.

Shukla believes it will take another three to four years for humanoids to become as prevalent as humans. “For every human, we probably might have one humanoid,” he said.

Addverb focuses on AI, computer vision, and gesture-based control to expand its portfolio of autonomous and semi-autonomous robotic solutions. The company also claims to address industrial bottlenecks in warehouse operations, e-commerce logistics, and security automation.  

Humanoids to Enter Soon

As per Shukla, Addverb aims to launch its first humanoid robot in Q4 (October to December) of 2025. Unlike traditional robotic automation, humanoid robots offer the advantage of seamless integration into human-centric environments without requiring major infrastructure changes.  

“We don’t see it as a consumer product per se, but it could find application in…a burger joint, but not for a personal user or consumer-based,” Shukla added. 

The robot could function in workplaces such as warehouses, healthcare facilities, and security operations, where automation has yet to fully replace manual processes. He also noted that while humanoid robotics is advancing, widespread adoption will take time.  

These robots will evolve through iterative upgrades, ensuring robustness and adaptability for various industries. The company is leveraging its partnerships with Reliance and other innovation-focused firms to refine and test its humanoid solutions before a large-scale rollout.

In November last year, Addverb’s co-founder and CEO, Sangeet Kumar, said that the company would initially unveil 100 robots in 2025, which would be used across various industries, including energy and retail.   

He noted that Addverb’s humanoids are advancing at a pace comparable to those being developed in the US, China, and Europe in terms of speed, height, and dexterity. The focus is on commercialising military-grade robots and exploring their deployment on Mars.

Trakr 2.0 Advances Quadruped Robotics  

Trakr 2.0, to be launched at LogiMAT India 2025, is an upgraded version of Addverb’s quadruped robot designed for industrial, security, and healthcare applications. It features a payload capacity of 20 kg, a 90-minute battery life, and stereo cameras for improved autonomous navigation.  

Unlike conventional wheeled robots, Trakr 2.0 operates on four legs, allowing it to traverse complex industrial terrains. It takes the company from “wheeled robotics to legged robotics”, according to Shukla. The robot is intended for warehouse automation, security patrolling, and disaster management applications.  

Addverb previously introduced quadruped robots for warehouse coordination, but Trakr 2.0 expands its use cases to healthcare logistics, surveillance, and industrial maintenance. 

“There will be a fleet of quadrupeds that will display agility, coordination, better planning, efficiency and overall collective intelligence of how a group of quadruped can perform different tasks,” Shukla said.

The robot’s integration with AI-based perception systems positions it as a potential alternative to traditional security and monitoring solutions. The stereo cameras enhance vision-based navigation, allowing the robot to function in low-light and dynamic environments.

Shukla said that the robotics and automation industry requires a specialised skill set, but the availability of trained professionals remains limited. Addverb acknowledges this gap and focuses on hiring young engineers willing to learn and experiment. The company has a global workforce, with professionals of over 60 nationalities contributing to product development, and runs internal upskilling programs to train graduates hired from campuses.

Warehouse Automation Gets Smarter with Gesture-Based Picking

Syncro is Addverb’s collaborative robot designed to automate picking operations with AI-driven precision. It enhances efficiency by executing pick-and-place tasks with minimal manual intervention. 

With a payload capacity of up to 10 kg, an extended reach of 1,300 mm, and a speed of 1 m/s, Syncro ensures faster order fulfilment while maintaining 99.9% picking accuracy.  

The robotic picking arm grips and lifts items from storage containers with precision. Customisable end-of-arm tooling and multi-axis movement make it adaptable for handling various stock-keeping units (SKUs). Syncro also facilitates decanting by efficiently transferring items from source bins to destination bins, optimising warehouse automation.

Another innovation, HOCA (high-order carousel automation), is an advanced batch-picking system designed for warehouses that require high-volume, space-efficient automation.

The system can handle payloads of up to 54 tons while operating within a compact picking zone of 5–10 square metres.

Its design enables automated goods retrieval, eliminating the need for manual aisle navigation. The system claims to ensure an accuracy rate exceeding 99%, reducing errors in inventory management and optimising order fulfilment workflows.  

According to Shukla, HOCA is built for e-commerce fulfilment centres, dark stores, and large-scale industrial storage. The batch-picking feature allows multiple orders to be processed simultaneously, a function aimed at businesses experiencing seasonal demand spikes. 

Moreover, Brisk, another launch by the company, introduces gesture-recognition technology into warehouse operations, allowing workers to execute tasks with minimal manual effort. The system incorporates a glove-based EAN reader that facilitates barcode scanning, accelerating the order processing cycle.  

Designed to reduce non-value-adding movements, Brisk replaces traditional walking-and-picking workflows with an automated goods-to-person system. 

The interface adapts to varied lighting conditions, ensuring smooth operation in low-visibility warehouse environments.  

Brisk builds on Addverb’s existing work in computer vision, reinforcement learning, and AI-driven automation. The product will target e-commerce, large-scale retail warehouses, and manufacturing hubs where manual picking inefficiencies impact productivity.  

Why Reliance?

Reliance, a strategic investor in Addverb, provides the company with a platform for developing and testing automation solutions across multiple industries. With business verticals spanning petrochemicals, retail, digital services, fashion, e-commerce, and new energy, Reliance offers a broad environment for deploying robotics and automation technologies.

As Shukla explained, Addverb has worked on several key projects within Reliance, including a remote ultrasound solution, a 5G-enabled robot for the Jamnagar Refinery, and automation for returns handling in retail, grocery, and fashion operations.

This partnership enables Addverb to scale its solutions rapidly, allowing for real-world testing in a collaborative environment. 

Implementing automation at Reliance’s operational scale gives the company a controlled environment to refine its products before expanding globally. The relationship also accelerates product development and market entry, helping Addverb bring innovations to market faster.

While Reliance plays a crucial role in infrastructure and deployment, Addverb is also working with Intel, NVIDIA, and Siemens to develop robotics and automation solutions. 

Addverb competes with major industrial robotics firms, including ABB, which focuses on industrial robotic arms, mobile robots, and cobots. These competitors rely heavily on simulation and synthetic data for robot path planning.

Addverb takes a different approach by controlling the entire end-to-end supply chain, from conceptualisation to deployment. The company has dedicated in-house teams for research and development, embedded systems, electrical engineering, manufacturing, and software development.  

This structure allows Addverb to reduce hardware complexity, cost, and weight. The ability to implement over-the-air (OTA) updates ensures continuous improvement post-deployment. 

Addverb has also developed custom controllers, enabling better optimisation of robotic functions. With over 350 customers and its own manufacturing facility, the company generates large-scale real-world data to refine automation systems.

Indian Army Embraces Smart Warfare with AI-Powered Combat Systems

Major Raj Prasad’s latest innovations, the Xploder and the MRMS, were showcased at the India Pavilion to defence minister Rajnath Singh.


The Indian Army has been rapidly embracing AI and autonomous systems to enhance national security while minimising human risks in combat. As modern warfare evolves, the military is prioritising indigenous innovations, ensuring self-reliance in defence technology. 

This transformation aligns with India’s Aatmanirbhar Bharat initiative, which seeks to reduce dependence on foreign military imports.  

India has been an early adopter of AI in defence. In 2018, the country established the Defence Artificial Intelligence Council (DAIC) to drive innovation.

Despite its efficiency gains, using AI in combat remains a challenge. While AI enhances operations, human decision-making remains superior in unpredictable battle scenarios. For now, AI’s role is focused on predictive maintenance and manned-unmanned operations.

Meet Major Raj Prasad  

Major Raj Prasad, a service innovation officer in the Indian Army Corps of Engineers from the Army Design Bureau and the 7 Engineer Regiment, is at the centre of this change. He has developed groundbreaking innovations that could redefine battlefield tactics. 

Major Prasad has demonstrated homegrown defence solutions by developing twelve cutting-edge military technologies, four of which have already been inducted into the Indian Army.

Speaking to AIM at the Aero India 2025 event in Yelahanka, Bengaluru, he described his innovations as fully operational battlefield solutions designed to increase combat effectiveness and reduce casualties.  

His latest innovations, Xploder and the Mobile Reactive Mine System (MRMS), were showcased to defence minister Rajnath Singh at the India Pavilion.

These revolutionary systems have also gathered national attention, even drawing interest from Prime Minister Narendra Modi and Indian Army chief general Upendra Dwivedi. Their induction into the Indian Army marks a significant shift in how India prepares for future conflicts.

Xploder: The AI-Powered Kamikaze  

One of Major Prasad’s most promising innovations is Xploder, an unmanned ground vehicle (UGV) designed to enhance safety in counter-insurgency and counter-terrorism operations. 

This six-wheeled, all-terrain vehicle is built for high-risk scenarios where human intervention can be fatal.  

Xploder is capable of reconnaissance, explosive payload delivery, and improvised explosive device (IED) disposal. It can be remotely controlled to enter dangerous areas, identify threats, and neutralise them without risking human lives.

“In the aspect of room intervention in counter-terrorism operations, going inside each and every room and searching is difficult due to the risk of casualties because the militant can be anywhere, and it is a long-drawn process. So, this is used for reconnaissance in case a militant is found,” Major Prasad said.

Additionally, it can function as a kamikaze device programmed to detonate in enemy hideouts, making it a formidable weapon in urban warfare. The Indian Army is already considering mass procurement of Xploder, signifying its importance in modern military strategy.  

MRMS: A Walking Mine Hunting Its Target  

Traditional landmines worldwide are static and pressure-activated, posing risks even to friendly forces. They are used mostly for defensive purposes: to harass, deny, and delay the enemy.

Major Prasad’s MRMS introduces a radical departure from conventional mine warfare. This advanced mine system mimics the mobility of a spider, actively searching for its target instead of waiting for the target to step on it. He calls it a “reactive mine”.

The MRMS can be deployed via unmanned aerial vehicles (UAVs), drones or ground vehicles, allowing it to be dropped directly into enemy zones. 

Once activated, it navigates towards enemy vehicles and detonates underneath them. This ability makes it a highly effective weapon for disabling armoured formations. 

“For example, if a tank (a column of a squadron of tanks) is coming, you can just send it across in the path of the tank, under the belly of the tank, and it can blast. This is going to create a big defensive aspect in the enemy’s area of response (AOR).”

With Economic Explosives Limited (EEL) partnering for its production, MRMS is set to become a crucial part of India’s defence arsenal.

Upcoming Innovations 

It doesn’t just stop here. Major Prasad is already working on new AI-driven combat systems to further modernise the Indian Army. 

One of his key projects is an AI-enabled mine detection system that aims to reduce the risks associated with traditional demining methods. He is also developing Agniastra, a multi-target portable remote detonation system capable of neutralising targets from five to ten kilometres away. These innovations indicate that India is not just catching up with global military technology but setting new benchmarks for autonomous combat systems.  

Moreover, the Indian Army also announced recently that it is set to retire 4,000 mules that have served in remote and mountainous regions and replace them with AI-powered robotic dogs.

As displayed in the parade for the 77th Army Day at the Southern Command Investiture Ceremony 2025 in Pune, India is the second nation to feature this technology after China.

Designed to replace mules in high-altitude warfare, these robotic quadrupeds can navigate challenging terrain while carrying payloads, and come equipped with thermal cameras and 360-degree sensors.

They can carry payloads of up to 12–15 kilograms and operate in extreme temperatures ranging from -40 to 55 degrees Celsius.

Globally, militaries in the US, China, and Russia are investing in robotic warfare, and India is following suit. The transition to robotic logistics reflects the growing importance of automation and AI in military operations.  

What’s Next? 

At the end of last year, the Indian Army collaborated with BEL to launch the Indian Army AI Incubation Centre (IAAIIC) in Bengaluru. Army chief Dwivedi virtually led the launch, underscoring the Army’s commitment to AI for operational excellence.

In just six months, systems like Vidyut Rakshak, Agniastra, and Xploder have moved from development to deployment, reinforcing India’s push for self-reliance. Major Prasad’s previous development, a wireless electronic detonation system, has already been integrated, demonstrating the Army’s commitment to rapidly absorbing indigenous solutions.

For the first time, technology has also been transferred to private defence manufacturers through Transfer of Technology (ToT) agreements facilitated by the Army Design Bureau, fostering large-scale production and strengthening India’s defence ecosystem.

Recognised by top leadership, this milestone reflects the Army’s dedication to technological evolution. As the ‘Year of Technology Absorption’ progresses, this seamless transition from innovation to induction is setting a new standard for India’s defence modernisation.

Going by the direction of this year’s displays, the coming years could see the mass deployment of smart, unmanned combat systems, from autonomous reconnaissance vehicles to AI-driven missile defence networks. By placing a strong emphasis on AI, automation, and indigenous production, the Indian Army is ensuring that it remains prepared for future conflicts.

Aero India 2025 is All About Aatmanirbhar Bharat with DRDO’s Next-Gen Tech

Defence minister Rajnath Singh declared that 2025 will be the ‘Year of Reforms’ for Indian defence.


India’s premier defence research organisation, the Defence Research and Development Organisation (DRDO), is making a powerful statement at Aero India 2025 with a showcase of indigenously developed cutting-edge technologies and systems.

At the heart of the display is the full-scale model of India’s first 5.5 Gen stealth aircraft, the Advanced Medium Combat Aircraft (AMCA), which symbolises the country’s strides in advanced aviation technology. 

The India Pavilion, a testament to the Make-in-India initiative, brings together innovations from private industries, Defence Public Sector Undertakings (DPSUs), and start-ups. It displays over 330 products across 14 technology zones.  

DRDO’s exhibit features state-of-the-art fighter aircraft models, advanced missile systems, and naval warfare technologies. The key highlights include the Twin Engine Deck-Based Fighter (TEDBF), the LCA-Mk2, the Kaveri Derivative Aero Engine, and the Naval Anti-Ship Missile-Medium Range (NASM-MR). 

In addition to its exhibition, DRDO is hosting a seminar titled ‘DRDO Industry Synergy towards Viksit Bharat: Make in India – Make for World’ to promote self-reliance and boost defence exports. 

Inauguration by Defence Minister

Aero India 2025, Asia’s largest air show, commenced on February 10 at the Air Force Station in Yelahanka, Bengaluru. The show focuses on technological advancements in the aerospace and defence sectors. 

The biennial event, organised by the Defence Exhibition Organisation under the defence ministry, brought together global aerospace leaders, defence strategists, and government officials. 

This year’s event is being held under the theme ‘The Runway to a Billion Opportunities’, which underscores India’s ambitions in aerospace and defence innovation. 

It was inaugurated by defence minister Rajnath Singh, who highlighted India’s rapid advancements in defence technology, its growing industrial capabilities, and its vision for international collaboration. 

In his opening remarks, Singh emphasised that Aero India 2025 is not just a platform for showcasing technological innovation but also a bridge for strengthening global partnerships. 

“We often interact as buyers and sellers, where our relations are at a transactional level. However, at another level, we forge our partnership beyond the buyer-seller relationship to the level of industrial collaboration,” Singh added, stressing security and stability.

He declared that 2025 will be the ‘Year of Reforms’ for Indian defence, emphasising that reforms will not be limited to the government level but will involve active participation from the armed forces, defence PSUs, and private industry.

Moreover, Singh highlighted India’s growing role in global defence and urged long-term industrial collaborations beyond buyer-seller ties. He cited the Tata-Airbus C-295 aircraft project as a model for future cooperation. 

He highlighted India’s commitment to defence exports and indigenous production and noted the sector’s rapid growth. Notably, defence production is expected to exceed ₹1.60 lakh crore and exports ₹30,000 crore by 2025-26. With ₹6.81 lakh crore allocated in the Union Budget, India is emerging as a global hub for aerospace manufacturing.

AI in the Exhibitions

This year’s Aero India exhibition was packed with high-tech displays and live demonstrations, offering a glimpse into India’s evolving defence capabilities. The exhibition runs from February 10 to 14, with the first three days dedicated to business interactions and the final two days open to the general public. 

A key attraction at the air show is the MBC2 Swarm Drone System, an AI-powered drone swarming capability that represents India’s growing expertise in autonomous aerial combat. 

The event also features an AI-powered mission planning and debriefing system, which uses real-time data analytics to enhance combat strategy and operational effectiveness. 

Bharat Electronics Limited (BEL) has showcased quantum cryptography, 5G defence solutions, unmanned warfare technology, space situational awareness systems, and theatre command systems. 

Advanced communication technologies such as the Ku Band Exciter, direct RF signal processing, and Digital Light Engine (DLE) are also on display. AI-driven innovations include generative AI-powered virtual assistants, AI-based language translation tools, and speech analysis systems.  

The AI voice command system introduced at the event aims to improve operational efficiency and pilot decision-making by integrating advanced automation with aircraft controls. This initiative aligns with India’s push towards self-reliance in defence technology and innovation.  

India Aims for Self-Sufficiency

Aero India 2025 is setting the stage for India’s technological leap in defence and aerospace. The showcase focuses on India’s Aatmanirbhar Bharat vision, which aims to reduce dependency on foreign technology while enhancing force multipliers for tri-service operations. 

The event also highlights India’s growing defence electronics and radar technology. A major focus is on Gallium Nitride (GaN) semiconductor solutions, which are key to the development of next-generation radars and electronic warfare systems. 

Moreover, the D4 Radar Anti-Drone System, designed to counter emerging UAV threats, is generating significant interest among international defence buyers.  

The exhibition featured aerobatic displays by the Indian Air Force (IAF) and showcased cutting-edge technologies, including developments from Indian start-ups at the iDEX pavilion.  

In his inaugural speech, Singh cited the development of high-tech products such as the Astra missile, the New Generation Akash missile, and autonomous underwater vehicles as examples of India’s growing capabilities.  

There has been a notable shift towards fully indigenous unmanned aerial vehicles (UAVs), with manufacturers integrating advanced AI to enhance surveillance, efficiency, and navigation in challenging conditions. 

This movement aligns with the government’s initiative to eliminate Chinese components from defence equipment. For instance, Delhi-based startup Enord showcased its Inspector Lite defence variant, a surveillance drone entirely free of Chinese parts. 

This 4.8-kg carbon fibre UAV features ‘Ease Link’ for operations beyond visual line of sight, a ‘Surround Sense’ detect-and-avoid system, onboard AI processing, real-time crowd detection, and swarm communication capabilities. 

Similarly, drone firm ideaForge unveiled the NETRA 5, its latest surveillance drone equipped with dual payload systems. It uses AI-powered analytics to track people and objects and supports GNSS-denied operations, allowing it to return home even if jammed.

Space-based Defence Applications

The event also underscores India’s growing strength in space-based defence applications. The Vikram 1 space launch vehicle signals progress in the private space sector, while the Garuda Mission’s Miniaturised Multi-Payload Satellite advances tactical reconnaissance. 

GalaxEye, a Bengaluru-based aerospace startup, showcased ‘Drishti Mission’, the world’s first multi-sensor SAR + MSI Earth observation satellite. The satellite delivers high-resolution all-weather imaging and is equipped with a synthetic aperture radar (SAR) sensor and a multispectral imaging (MSI) sensor. 

Alongside them, Pixxel showcased its Firefly constellation, demonstrating hyperspectral imaging for defence, agriculture, and environmental monitoring. On January 15 this year, Pixxel also launched the first three satellites of this Firefly constellation aboard SpaceX’s Transporter-12 mission. The constellation offers the world’s highest-resolution hyperspectral imaging.

Aero India 2025 reinforces India’s leadership in next-generation defence technology, fostering global collaboration and indigenous innovation. Other key events include the Defence Ministers’ Conclave, the CEOs Roundtable, and the India and iDEX Pavilions, which highlight India’s growing defence ecosystem. 

The event, which strongly focuses on AI, automation, and space-based defence, accelerates India’s path to technological self-reliance. As it progresses, defence experts and policymakers see India strengthening global partnerships and advancing its role in aerospace innovation for a secure future.

Are Regulatory Delays Slowing Down the Indian Drone Revolution?

One of the primary regulatory challenges the industry faces is restricted airspace access.


India’s drone industry is awaiting a revolution with the potential to transform sectors like agriculture, infrastructure, and security. However, regulatory hurdles and slower approval processes continue to keep this industry grounded. 

In an insightful conversation with AIM, Skylark Drones co-founder and CEO Mughilan Thiru Ramasamy gave first-hand insights into the impact of these delays on innovation and business growth. 

Drones could revolutionise infrastructure monitoring, agriculture, law enforcement, and disaster response. However, he revealed that the company still doesn’t have permission to fly drones in some regions of Bengaluru.

As of September last year, 10,208 type-certified commercial drones have been registered under the Digital Sky Platform, a digital system for managing drone operations in India, as per MoS civil aviation, Murlidhar Mohol, in a recent Rajya Sabha Q&A session. 

The Directorate General of Civil Aviation (DGCA) has issued 96 type certificates for different drone models based on their purpose. Of these, 65 models are designed for agricultural applications, while 31 are focused on logistics and surveillance. 

These figures highlight the growing adoption of drone technology, particularly in agriculture, where drones are increasingly used for crop spraying, monitoring, and yield assessment.  

Regulatory Roadblocks

Despite a series of policy reforms aimed at streamlining drone operations, challenges in obtaining clearances and navigating airspace restrictions have created bottlenecks that are slowing down innovation and adoption.

“To fly a drone in a city, you need approval from multiple agencies – HAL airport, CISF, the Commissioner’s office, and so many others,” Ramasamy said. Despite government initiatives, getting approvals for drone operations remains an uphill battle. 

One of the primary regulatory challenges the industry faces is restricted airspace access. Under The Drone Rules 2021, India’s airspace is divided into three categories: red, yellow, and green zones. While 86% of the country’s airspace falls under the green zone, where drone operations do not require special permissions, the remaining areas are heavily regulated. 

Red zones, of which there are approximately 9,969 across the country, require special approvals from the civil aviation ministry and the concerned zone authorities before any operations can take place. Yellow zones, typically located around airports, require permission from air traffic control (ATC) before drone operations can commence.
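To make the zoning rules above concrete, here is a minimal sketch that encodes the three categories and the approvals each one requires, assuming only what the article states; the data structure and function names are purely illustrative and do not correspond to any official system or API.

```python
# Illustrative encoding of India's drone airspace zones as described above.
# The categories and approval authorities follow the article's summary of
# The Drone Rules, 2021; the structure itself is hypothetical.
ZONE_RULES = {
    "green": [],                                   # no special permission needed
    "yellow": ["Air Traffic Control (ATC)"],       # typically around airports
    "red": ["Ministry of Civil Aviation", "Concerned zone authority"],
}

def required_approvals(zone: str) -> list[str]:
    """Return the approvals needed before flying a drone in the given zone."""
    return ZONE_RULES[zone.lower()]

print(required_approvals("green"))   # [] -> fly without special permission
print(required_approvals("red"))     # two separate approvals required
```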

This zoning system, while essential for safety, has created delays in obtaining necessary approvals, especially in urban areas where drone-based services like e-commerce deliveries, medical supply transport, and infrastructure monitoring could be transformative.  

The biggest challenge is that there is no central platform where one can apply for permission. “Everything still runs on pen and paper, or at best, email, which is just digital paper,” Ramasamy explained. 

Bodhisattwa Sanghapriya, founder and CEO of IG Drones, told AIM that while the government has made progress in easing compliance, faster clearances for trusted domestic players will further accelerate industry growth. 

By prioritising reliable drone manufacturers and solution providers, India can strengthen national security while enhancing the country’s capabilities in surveillance, infrastructure monitoring, and disaster response.

The regulatory landscape for drones in India has improved significantly, with the government actively streamlining approval processes and promoting indigenous technology. 

“Although some operational challenges remain, particularly in securing approvals for sensitive zones such as defence areas and no-drone zones, the regulatory mechanism is much more streamlined than before,” Sanghapriya added.

He further said that compared to previous years, regulatory delays have been reduced, particularly for startups manufacturing 100% made-in-India drones with no Chinese components. This aligns with the government’s vision for Atmanirbhar Bharat and its push to make India a global drone hub by 2030.    

Supply vs Demand: The Real Issue

The regulatory delays don’t just affect drone startups; they impact enterprises, government projects, and the broader ecosystem. Ramasamy pointed out that while India has focused on incentivising drone manufacturing, real growth will only happen when demand is created. 

“More than subsidies, the government needs to create real use cases that push adoption,” he added. Until then, navigating the regulatory maze will remain one of the biggest roadblocks to India’s drone revolution.

Even though the Production Linked Incentive (PLI) Scheme for drones and drone components did not see any allocation in the recent Union Budget 2025, the government has prioritised funding for the space tech industry as a whole.

Notably, the government has allocated ₹676.85 crore to the Namo Drone Didi program as part of its Central Sector Schemes. Regardless, the industry has yet to take centre stage in the Budget.

While China’s DJI dominates the global drone market with fully integrated unmanned aerial vehicle (UAV) systems, India continues to struggle with roadblocks despite both countries having started on similar grounds for innovation.

What is the Government Doing?

The government has taken several steps to ease the regulatory burden on drone operators. More recently, in August 2024, the government amended The Drone Rules to simplify the registration process by removing the requirement for a passport. 

“Now, a government-issued proof of identity and address, i.e. Voter ID, Ration Card or Driving License, can now be accepted for registration and de-registration or transfer of drones,” Mohol explained.

Despite these improvements, policy bottlenecks remain a concern. For instance, drone-based delivery services, which have the potential to improve healthcare access in remote areas, face operational delays due to lengthy bureaucratic approvals. 

Similarly, drone surveying and mapping in the infrastructure sector require clearances from multiple authorities, leading to project slowdowns.  

As per Mohol, the government claims to be working towards addressing these challenges. One significant safety measure implemented is the requirement for all certified drones to have a tamper-avoidance mechanism that protects both the firmware and hardware from unauthorised access. This ensures that drones used in critical sectors remain secure and resistant to hacking.

However, for India to fully harness the benefits of drone technology, further reforms are needed. The Digital Sky platform must be enhanced to enable real-time digital approvals for operations in restricted zones. 

Additionally, expanding financial incentives and promoting drone adoption in sectors beyond agriculture will be key to unlocking new opportunities. 

This Hyderabad Startup is Building India’s First AI Lab in Orbit

The startup’s next step is launching two fully operational satellites this year, with an ambition to build in-orbit computing.


TakeMe2Space, a Hyderabad-based space-tech startup, is aiming to make space more accessible by launching India’s first AI-driven space laboratory. Founded by Ronak Kumar Samantray, the company is working to change the way people interact with satellites. 

Unlike traditional models where satellite access is restricted to governments, defence agencies, or elite research institutions, TakeMe2Space wants to democratise space, offering real-time access to satellites for students, researchers, and businesses alike.

“Our goal is to ensure that everybody’s ideas can be taken to space,” Samantray told AIM in an exclusive interview. “You don’t have to be in NASA, ISRO, or an IIT to run an experiment in space. Sitting in Kerala, Delhi, or even Antarctica, you should be able to operate a satellite.”

The company recently conducted a technology demonstration mission with ISRO, proving the viability of its approach. Now, the next step is launching two fully operational satellites this year, with a long-term ambition to build the future of computing in orbit.

Samantray (second from the left), along with former chairman of ISRO, S. Somanath

As of last year, the Indian space economy was valued at approximately $8.4 billion, constituting a 2% share of the global space market. The country currently operates 56 active space assets, including 19 communication satellites, nine navigation satellites, four scientific satellites, and 24 earth observation satellites, as per the economic survey 2024-25. 

With the government aiming to scale the space economy to $44 billion by 2033, inclusive of $11 billion in exports, which would represent 7-8% of the global share, TakeMe2Space believes that accessing space should be as simple as logging into a cloud computing service.

Space Can be Hands-On for the Next Generation

Samantray’s motivation for TakeMe2Space comes from his background in computer science. Growing up, he had easy access to computers, which nurtured his love for coding. However, he observed that space has remained largely inaccessible to young minds. 

“If you’re interested in space, the most you can do today is read a research paper or maybe play with an electronics kit,” he explained. “Nobody gets to task their own satellite.”

TakeMe2Space aims to bridge this gap by offering an AI-powered satellite lab. Schools and universities can subscribe, allowing students to log in remotely, upload code in Python or C++, and interact with a real satellite. 

“Just like how schools have computer labs, electronics labs, and robotics labs, we believe there should be a satellite lab,” said Samantray. “Our satellites will be openly accessible for students to run their personal experiments.”
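To illustrate what such a classroom experiment might look like, here is a deliberately simplified, hypothetical Python sketch; TakeMe2Space has not published a public SDK, so the SatelliteSession class and its methods below are invented purely for illustration.

```python
# Hypothetical sketch of a student experiment for an orbital AI lab.
# The SatelliteSession API is invented for illustration only; it is not a
# real TakeMe2Space interface.
class SatelliteSession:
    def capture_image(self):
        # A real session would return a frame from the onboard camera;
        # here we return a small placeholder grid of pixel values.
        return [[0, 50, 200], [10, 60, 210], [20, 70, 220]]

    def downlink(self, payload, tag):
        print(f"queued '{tag}' for downlink: {payload}")

def run_experiment(session):
    """Capture a frame in orbit and downlink only a compact summary."""
    frame = session.capture_image()
    mean_brightness = sum(map(sum, frame)) / (len(frame) * len(frame[0]))
    session.downlink({"mean_brightness": mean_brightness}, tag="summary")

run_experiment(SatelliteSession())
```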

Samantray (left) at the Sriharikota launchpad.

So far, the education sector has shown interest, but surprisingly, most early adopters are not universities. Out of 20 customers who have signed up, only four are from the education sector, while the remaining 16 are from GIS (geographic information systems) and data analytics companies.

From a business standpoint, the company offers 90 minutes of satellite time in orbit for ₹20,000.

TakeMe2Space payload on board the ISRO SpaDeX Mission

AI in Space is More Than Just Data Collection

The integration of AI in TakeMe2Space’s model is a key differentiator. Traditionally, satellites capture raw data, which is then processed on Earth. But AI-driven satellites can process images in orbit, making decisions on what data to collect and download.

One experiment conducted on TakeMe2Space’s AI lab by the University of Southampton involved using a low-power AI algorithm to reduce motion blur in satellite images. “A satellite moves at 7 km per second, so capturing a clear image is a challenge,” said Samantray. “Instead of using traditional pointing and staring techniques, AI can remove motion blur in real-time.”

Samantray and the team working on the payload at TakeMe2Space.

AI also allows for real-time object detection and change detection. This means satellites can prioritise what images to capture and transmit, reducing unnecessary data transmission and saving bandwidth. 
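As a rough sketch of the kind of in-orbit gating described here, the snippet below transmits a frame only when enough pixels have changed since the previous capture; the thresholds and frame format are assumptions made for illustration, not details of TakeMe2Space’s actual pipeline.

```python
import numpy as np

# Illustrative onboard change-detection gate: downlink a new frame only when
# it differs enough from the previous one. Threshold values are assumptions
# for illustration, not parameters of any real satellite system.
def should_downlink(prev_frame, new_frame, pixel_delta=25, changed_fraction=0.05):
    """Return True if the new frame changed enough to be worth transmitting."""
    diff = np.abs(new_frame.astype(int) - prev_frame.astype(int))
    return np.mean(diff > pixel_delta) > changed_fraction

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (64, 64))
almost_same = np.clip(prev + rng.integers(-2, 3, (64, 64)), 0, 255)
shifted = np.roll(prev, 10, axis=1)            # simulates a changed scene
print(should_downlink(prev, almost_same))      # False -> skip, save bandwidth
print(should_downlink(prev, shifted))          # True  -> worth transmitting
```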

“Our aim is not just to provide satellite data,” Samantray emphasised. “We want to give users control of the satellite itself.”

Security and Ethical Concerns

With TakeMe2Space offering satellite access to a broader audience, concerns about data security and ethical usage naturally arise. 

Allowing individuals and businesses to task satellites in real time raises questions about privacy, misuse, and cybersecurity risks. Samantray acknowledged these challenges and detailed the safeguards the company has implemented.  

“We are enabling people to control a satellite, which means we have to be two steps ahead in terms of security,” he said. “Our system is designed to preemptively stop any harmful actions before they occur, rather than reacting after the fact.”  

To prevent unauthorised activities, TakeMe2Space does not equip its satellites with propulsion systems, ensuring they cannot be hijacked and redirected toward other objects.

Additionally, the company has capped the resolution of its satellite imagery at no finer than 5 meters per pixel, preventing privacy violations. “Even if something goes wrong, no one can use our satellite for surveillance or for interfering with other space objects,” Samantray assured.  

On the data front, TakeMe2Space follows strict encryption protocols. Customers retain ownership of the data they generate, and the company does not store or claim rights over it. “We are like an infrastructure provider, similar to how AWS doesn’t own the applications running on its servers,” he explained.   

Building the Future of Space Computing in India

As reiterated by the founder, TakeMe2Space does not intend to compete with conventional Earth observation firms. Rather, it envisions a future in which computing transitions to space. 

“We’re not here to be another Earth observation company,” said Samantray. “We want to build data centres in orbit where AI and computing happen in space, not on Earth.”

By shifting heavy computation tasks to satellites, TakeMe2Space aims to reduce Earth’s power consumption. The world’s increasing reliance on AI, data storage, and cloud computing is driving exponential energy demand. 

Running AI models in space, where temperatures are extremely cold and heat dissipation is more efficient, could be a long-term solution.

“Space gives you a very controlled and predictable temperature environment, and whatever heat you generate up there has no impact on Earth’s atmosphere. The absolute temperature of any point in space is 4 Kelvin, so as much heat as you generate, it absorbs the heat, which will just be a point of heat for space.”

Looking ahead, TakeMe2Space hopes to scale its model, expanding beyond AI labs to full-fledged space computing infrastructure. The company is not reliant on government funding but sees the private sector as its primary market. “We’re building for private businesses, not just defence or government customers,” Samantray clarified.

If it succeeds, the startup may redefine global interactions with satellites, making space an accessible laboratory for everyone.

ABB is Shaking Things Up with Tangible AI Solutions

One of ABB’s most impressive achievements is in robotic vision and navigation.


While many companies are caught up in the AI hype cycle, ABB, an industrial robot supplier and manufacturer, has been building and implementing practical AI solutions for nearly a decade. 

In an interview with AIM, Sami Atiya, president of robotics and discrete automation at ABB, explained how the company has taken a measured, value-driven approach to AI innovation, which is delivering results across multiple industries.

“We at ABB had our first research done in AI more than a decade ago, in 2014. It’s already implemented in many of the systems we use today,” noted Atiya. This long-term perspective has helped ABB distinguish between AI hype and genuine technological maturity.

This, according to him, is a critical approach considering the cycles of inflated expectations and subsequent “AI winters” that have shaped technological development over the past few years.

Different Approach to AI Implementation

Rather than creating centralised AI teams or pursuing grand projects, ABB has adopted a distributed, customer-centric approach. “What we learned is we don’t drive technology from the top of central needs. We drive it from customer needs,” Atiya explained. 

The company maintains an AI Council that coordinates activities, manages an AI repository, and oversees education initiatives while allowing individual teams to develop solutions based on specific customer requirements.

This approach has allowed ABB to categorise and track projects across the company, distinguishing between implemented solutions, pipeline developments, and exploratory ideas. This method, Atiya said, not only ensures that promising concepts are nurtured but also avoids the pitfall of investing in ideas that may not materialise. 

Over the last decade, ABB has expanded its AI portfolio to include over 250 projects, many of which are already delivering tangible results. “Most of these projects here are available for purchase today,” Atiya said.

Real-World AI Applications

One of ABB’s most impressive achievements is in robotic vision and navigation. The company has developed AI systems that allow robots to recognise and handle objects they’ve never encountered before. “What our research has done is that we now have a neural network that can recognise the shape of the object that it has not seen before,” explained Atiya.

Another groundbreaking implementation is in factory navigation. Using the Visual SLAM navigation technology, powered by AI and 3D visual detection, robots can now navigate complex factory environments without requiring physical guides or markers. 

“The robot actually goes around, figures out where it is, and then starts creating a map… You put another robot in, they talk to each other, and they learn,” Atiya described this advancement.
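One rough way to picture robots pooling what they have mapped is merging occupancy grids, as in the generic sketch below; this is a conceptual illustration of the idea, not ABB’s Visual SLAM implementation.

```python
import numpy as np

# Conceptual illustration of two robots sharing maps: each grid cell is
# -1 (unexplored), 0 (free space) or 1 (obstacle). Not ABB's actual system.
def merge_maps(map_a, map_b):
    """Fill cells robot A has not explored with robot B's observations."""
    merged = map_a.copy()
    only_b_knows = (merged == -1) & (map_b != -1)
    merged[only_b_knows] = map_b[only_b_knows]
    return merged

a = np.full((4, 4), -1); a[0, :] = 0   # robot A has mapped the first row as free
b = np.full((4, 4), -1); b[:, 0] = 1   # robot B has found obstacles in column 0
print(merge_maps(a, b))
```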

AI’s Role in Sustainability and Workforce Evolution

Sustainability is a cornerstone of ABB’s AI strategy. This was highlighted during a panel discussion led by Sara Larsson, CEO at the Swedish Chamber of Commerce India, featuring leading AI experts like Khushaal Popli, program director, IIT Bombay; Kishan Sreenath, VP, Powertrain, Volvo Group; and Kaushik Dey, head of research, Ericsson.

Panel discussion at ABB, Bengaluru campus. (From left to right) Sara Larsson, Kaushik Dey, Kishan Sreenath, Khushaal Popli, Sami Atiya, and Subrata Karmakar.

AI-powered solutions like building analysers optimise energy consumption by integrating weather forecasts, operational data, and energy patterns. These efforts not only improve efficiency but also support global sustainability goals.

As industries evolve, so too must their workforces. ABB invests significantly in upskilling its employees by combining AI expertise with engineering knowledge. 

Atiya also shared insights into ABB’s hackathons and training programs, including a recent initiative in India that trained 2,000 employees on AI in a single day and generated over 200 new AI use cases.

He explained this as a compact way of reinforcing and energising the teams. “It’s not just about hiring AI experts; it’s about expanding the capabilities of our existing teams,” he remarked.

ABB’s Strategic Upskilling and Recruitment

ABB’s leadership in AI extends beyond technological advancements to strategic talent acquisition and workforce development. With over 10,000 employees in India, ABB leverages the country’s exceptional talent pool across engineering and software domains. 

According to Atiya, ABB recruits top-notch professionals while maintaining a low attrition rate, owing to its strong reputation and focus on employee growth and education. “We like to keep our employees,” Atiya said.

However, the company’s strategy isn’t limited to external hiring; upskilling its existing workforce is a key priority. He emphasised the importance of blending AI expertise with engineering disciplines like mechatronics to foster innovation. 

This approach ensures ABB’s teams are equipped with both technical knowledge and domain-specific expertise, which is critical for solving industry challenges. “It’s not about hiring AI experts alone; it’s about expanding the capabilities of our own people,” Atiya highlighted. 

By cultivating multidisciplinary teams and prioritising lifelong learning, ABB is building a workforce ready to lead industrial transformation. This reaffirms its commitment to people as its greatest strength.

In addition, Atiya also emphasised at this year’s World Economic Forum in Davos, “Like robotics, AI will lead to new jobs and change the way we work. We must inspire innovation and emphasise the importance of learning and upskilling to realise its benefits.”

Synthetic Data and AI Limitations

ABB’s success is built on collaboration. To foster innovation, it works with startups, universities, and technology leaders. Partnerships like its acquisition of Sevensense for advanced robot navigation and ongoing collaborations with IIT Bombay are vital to scaling breakthroughs.

Atiya was candid about the challenges of AI, particularly the risks of bias and data misalignment. 

He stressed the importance of synthetic data in addressing the shortage of real-world training data but warned of the risks of amplifying existing biases if quality controls are inadequate. 

He also acknowledged that while generative AI and LLMs have potential, they face limitations.

The Future of Human-Machine Collaboration

Atiya sees natural language interaction as the next frontier for human-machine collaboration. ABB is pioneering systems that enable robots to understand complex verbal commands, such as arranging objects based on human instructions. 

“In the past, we had to learn the language of machines. In the future, machines will learn ours,” he noted. This focus on human-centric AI aligns with ABB’s broader mission of enhancing human capabilities, not replacing them.

Will Union Budget 2025 Lift Off India’s Space Mission?

In the previous Budget, finance minister Nirmala Sitharaman announced an impressive allocation of INR 1,000 crore, giving space technology a boost.


India’s space industry is poised for a major boost with the upcoming Union Budget. The success of Chandrayaan-3, which made India the fourth country to land on the moon, and the rise of private space startups have charged the sector. 

With ISRO’s ambitious roadmap and increasing global interest in India’s cost-effective space solutions, this Budget could be a defining moment for the sector. 

In the previous Budget, finance minister Nirmala Sitharaman announced an impressive allocation of INR 1,000 crore, giving space technology a boost. “[Something like] this will certainly help space-tech companies looking for the much-needed early-stage capital to get started,” Anil Joshi, managing partner at Unicorn India Ventures, said.

Manoj Agarwal, managing partner at Seafund, said, “As a deep tech-focused VC fund, the FM announcing Rs 1,000 crore space economy VC fund and R&D fund of Rs 1 lakh crore will work as a strong catalyst for startups in deep tech and space tech.” 

Over the Past Year…

India has made major progress in its space dreams over the past year. As of last year, the Indian space economy was valued at approximately $8.4 billion, constituting a 2% share of the global space market. The government envisioned scaling the space economy to $44 billion by 2033, including $11 billion in exports amounting to 7-8% of the global share. 

India currently operates 56 active space assets, including 19 communication satellites, nine navigation satellites, four scientific satellites, and 24 earth observation satellites. Recently, it launched the GSAT-20 satellite in collaboration with SpaceX. 

The Cabinet also approved the Gaganyaan follow-on mission, which will pave the way for the establishment of the first module of the Bhartiya Antariksh Station; the Chandrayaan-4 Lunar Sample Return Mission; the Venus Orbiter Mission; and the development of the Next Generation Launch Vehicle. 

At the AWS Summit, Clint Crosier, the director of the AWS Aerospace and Satellite business, called India the next space technology hub. He expressed that AWS sees India as a significant growth market and plans to invest $12.7 billion in cloud infrastructure in India by 2030.

Amidst this, a number of Indian space startups took off, quite literally. These spanned a wide range of areas, including Earth observation (Pixxel), in-space propulsion (Bellatrix Aerospace), and launch vehicle development (Agnikul Cosmos and Skyroot Aerospace).

The Indian Space Research Organisation (ISRO) saw the appointment of V Narayanan as the new chairman of the organisation and secretary of the Department of Space. He succeeded S Somanath, who retired after a stellar tenure. 

ISRO also announced the successful completion of its SpaDeX (Space Docking Experiment) mission, launched on December 30, 2024, from the Satish Dhawan Space Centre in Andhra Pradesh’s Sriharikota. This made India the fourth country in the world to achieve space docking, alongside the United States, Russia, and China.

This is the same launch site from which ISRO recently completed its 100th launch, the GSLV-F15/NVS-02 mission.

Under the SpaDeX mission, many Indian space startups launched payloads and took charge of leading India’s space mission. These included experiments from Mumbai’s Manastu Space Technologies Private Limited, Bengaluru’s Bellatrix Aerospace Pvt Ltd and GalaxEye Space Solutions Private Limited, Andhra Pradesh’s N Space Tech, Hyderabad’s TakeMe2Space, and Ahmedabad’s PierSight Space.

Additionally, on January 15 this year, Pixxel, a Bengaluru-based aerospace startup, launched the first three satellites of its Firefly constellation. These hyperspectral satellites, integrated via Exolaunch and launched aboard SpaceX’s Transporter-12 mission, offer the world’s highest-resolution hyperspectral imaging.

This breakthrough enhances climate monitoring, resource management, and environmental analysis with unprecedented precision.


A Promising Future

Yashas Karanam, co-founder & COO of Bellatrix Aerospace, expressed hopes for a PLI scheme to incentivise space tech companies and optimise expenses. He also emphasised the need for government contracts.

“Can industries actually spearhead the whole thing and make the entire constellation themselves? It could be a civilian satellite or a different satellite constellation. These kinds of individual budgets are going to really start companies to think about investing in these sectors,” he told AIM.

Another aspect he predicted might arise is the government becoming a customer for some of the products. This sentiment also resonated with Ankit Anand, founding partner at Riceberg Ventures (which invests in deep tech). He told AIM, “If the government becomes a customer, then every investor will know that this startup is able to really sell this thing at a scale.”

Gaurav Seth, co-founder & CEO at PierSight, highlighted to AIM the need for enhanced tax credits and funding to drive innovation, attract investment, and position India as a global leader in space exploration and satellite technology.

“Currently, companies can claim deductions for R&D under Section 35(2AB) of the Income Tax Act, but a higher deduction (e.g., 150%-200%) should be allowed for space R&D spending to incentivise long-term innovation,” he says.

Ronak Kumar Samantray, founder of TakeMe2Space, is confident that government support for the space sector is not just expected—it’s inevitable. 

Reflecting on India’s space policy, he believes it is among the best in the world, allowing private companies to hire talent from across the globe, unlike restrictive policies in countries like the US.

“For me, what’s exciting about the Budget is to figure out that new thing that the government decides to do right,” Samantray told AIM, highlighting his company’s belief that ‘space is for everyone’.

Samantray highlighted that India’s Make in India initiative played a crucial role in enabling space tech advancements. Without the earlier push for manufacturing, sectors like precision engineering and satellite production wouldn’t have the skilled workforce needed today. 

For India’s space tech sector, the Budget isn’t just about numbers. It’s about the direction the government sets, and if the past is any indication, space will continue to receive strong momentum, he believes.

Not Just Satellites But Drones as Well

Drawing lessons from the past, Mughilan Thiru Ramasamy, co-founder and CEO at Skylark Drones, a startup shaping India’s UAV (unmanned aerial vehicle) ecosystem, shared his expectations for the drone industry in the upcoming Budget.

With the rapid advancements in aerospace and semiconductors, he believes that drones will see increased Budget allocations. “I think the Drone Didi program will get some budget because they will try to create employment,” he told AIM in an interview. 

According to Mughilan, one key area of focus is real-time governance. He points to Telangana’s proactive approach, where the state is already integrating drones into administrative operations. 

However, he emphasises that beyond just funding drone technology, the government should focus on demand creation rather than just supply-side incentives. A strategic push to increase industry adoption of drones, whether in agriculture, infrastructure, or surveillance, will drive innovation, job creation, and industry growth.

Ankit Mehta, CEO of ideaForge Technology Limited, expressed particular optimism about the potential launch of a Production-Linked Incentive (PLI) Scheme 2.0 for drones. 

“While the reported ₹500 crore outlay is expected to extend beyond manufacturing to include services like drone leasing, indigenous software development, and counter-drone systems, given the potential of the technology and the industry, we need a much larger scheme amounting to ₹1,000-2,000 crore to unlock this opportunity over at least five years,” he said.

Beyond PLI 2.0, he, like Seth, also stressed the need for a dedicated R&D fund to drive innovation, technological advancements, product development, and exports.

“This Budget has the potential to cement India’s position as a global leader in drone technology, driving long-term growth, creating high-value jobs, and ensuring technological sovereignty,” Mehta said. 

For the drone sector, the upcoming Budget isn't just about subsidies or schemes; it's about the larger vision. A strong emphasis on demand-driven policies could position India as a global leader in drone technology while fostering sustainable economic growth.

The post Will Union Budget 2025 Lift Off India’s Space Mission? appeared first on Analytics India Magazine.

Will Upcoming Budget Deliver on India’s Semiconductor Needs? https://analyticsindiamag.com/deep-tech/will-upcoming-budget-deliver-on-indias-semiconductor-needs/ Fri, 31 Jan 2025 04:51:47 +0000 https://analyticsindiamag.com/?p=10162565

INR 6,903 crore was announced for the semiconductor sector in the Union Budget 2024.

As India positions itself as a global hub for semiconductor manufacturing, all eyes are on the Union Budget 2025. Discussions across all industries in India have intensified, with the semiconductor sector being no exception. Following significant allocations to key technology missions in the previous Budget, the semiconductor industry anticipates its moment in the spotlight this year.  

Over the past year, India’s ambition to be a global semiconductor hub has gained momentum, particularly with the government’s Production Linked Incentive (PLI) schemes and investments in R&D. These efforts aim to reduce reliance on imports and foster a robust domestic semiconductor ecosystem, aligning with global supply chain resilience trends.  

Many industry leaders have voiced their expectations, focusing on incentives for setting up fabs, expanding talent pools, and addressing critical gaps in semiconductor manufacturing infrastructure.  

The sector is also looking forward to measures that will ensure strategic partnerships with international players, enable indigenous chip design innovation, and provide tax breaks to stimulate private investment.  

As the government unveils its vision in the upcoming Budget, the semiconductor industry could play a defining role in advancing India’s goals of self-reliance and economic growth.  

Semiconductors in Last Year’s Budget

The Union Budget 2024 introduced pivotal measures for the semiconductor sector. To develop the semiconductor and display manufacturing ecosystem, a significant allocation of INR 6,903 crore – more than double the previous year’s allocation of INR 3,000 crore – was announced.  

Key provisions included 50% fiscal support for setting up semiconductor and display fabs, as well as support for compound semiconductors, silicon photonics, sensors fabs, and ATMP (assembly, testing, marking, and packaging) and OSAT (outsourced semiconductor assembly and test) facilities.

These measures aimed to strengthen India’s semiconductor supply chain and reduce reliance on imports. 

That said, there have been earlier concerns that the Budget bar is set too low. In an interview with AIM last year, V Ramgopal Rao, group vice-chancellor of BITS Pilani Campuses, said, “The next government looks at the last Budget and actual utilisation and adds some 5% to that, but the base is already very low.”

Additionally, due to geopolitical shifts and US sanctions, India must urgently secure strategic autonomy by developing its own semiconductor IP and products. 

Ajai Chowdhry, co-founder of HCL and chairman of the National Quantum Mission, emphasised that by designing indigenous chips, the country can safeguard against future global trade restrictions and technological sanctions. 

With most chips still imported, the government must prioritise Indian-made, high-quality chips for the nation. “We suggested the government provide a list of 30 chips and 30 priority products that should be developed and manufactured in India,” he said.

The EPIC Foundation has proposed an INR 44,000 crore allocation in the Budget, with INR 15,000 crore earmarked for system products and INR 11,000 crore for semiconductor products. Moreover, the central and state governments have allocated INR 90,000 crore for capital expenditure. This highlights India’s growing commitment to semiconductor self-sufficiency.

“We are already establishing five semiconductor plants across the nation, with more planned. It is our request to the government, ministry, as well as the finance minister to look at the proposal very critically as this has become much more urgent and important due to the new regulations coming in from the US,” Chowdhry stated.  

Key Recommendations From IESA This Year

As India continues its semiconductor and manufacturing journey, industry leaders like the India Electronics and Semiconductor Association (IESA) have shared crucial recommendations for the Union Budget 2025-26. 

Ashok Chandak, president of IESA, highlighted the importance of expanding existing initiatives and introducing targeted measures to ensure long-term sustainability and competitiveness.  

“The Semicon India Program and ISM have delivered significant contributions to GDP growth, job creation, foreign investments, industrial self-reliance, and bolstering India’s position in the global semiconductor market,” Chandak emphasised.  

The IESA's proposal includes extending the PLI scheme with an additional $20 billion over five years, supplementing the existing INR 76,000 crore. This would support the growth and innovation of industry projects, the Aatmanirbhar Bharat Abhiyan, and the Viksit Bharat 2047 initiative.

Chandak has also proposed a stricter PLI framework, ensuring 25% local value addition by 2025-26 and 30% by 2027.

Moreover, $5 billion in incentives is proposed for the electronics components industry. To foster innovation, IESA advocates allocating INR 10,000 crore for industry-driven R&D through a PPP model.  

Role of Data Infrastructure 

As India accelerates its digital transformation across sectors like finance, retail, and education, the demand for advanced and scalable data infrastructure continues to grow. 

Sunil Gupta, co-founder and CEO of Yotta Data Services, underscores the critical role of data centres and AI technologies in supporting this evolution, particularly with initiatives like Digital India and the IndiaAI Mission gaining momentum.  

“Investments in sovereign infrastructure, including data centres and AI-driven technologies, will not only bolster the nation’s tech status but also attract substantial private-sector investment,” Gupta stated to AIM.  

He further highlighted the importance of the Union Budget prioritising measures such as advancements in GPU and semiconductor technologies. This focus, Gupta believes, will propel growth in data centres and AI industries and position India as a leader in the global digital economy.

Shrirang Deshpande, strategic program head at Vertiv India, said, “We anticipate measures supporting the rise of data centres as the Union Budget draws near, such as incentives for integrating green energy, simplified regulations for expanding infrastructure as well as initiatives to improve connectivity in Tier-2 and Tier-3 cities and generating employment opportunities.”

Echoing this, Chris Miller, the author of Chip War, identified talent and infrastructure as two key challenges in India's path to progress in the chip space.

As countries like the US and China forge ahead with advanced 3–5 nm chip production, India grapples with foundational challenges. 

According to Miller, the time needed to develop infrastructure is a key challenge, particularly for specialised materials, chemicals, and tools essential to semiconductor manufacturing. However, experts warn that funding alone won’t resolve long-standing technological and infrastructural gaps.  

While optimistic about progress, Miller also cautioned that achieving full-scale capacity could take a decade, though efforts to build the necessary infrastructure are already underway.

The post Will Upcoming Budget Deliver on India’s Semiconductor Needs? appeared first on Analytics India Magazine.

US Export Rules Put India’s Chip Industry at Crossroads  https://analyticsindiamag.com/deep-tech/us-export-rules-put-indias-chip-industry-at-crossroads/ Tue, 21 Jan 2025 07:51:26 +0000 https://analyticsindiamag.com/?p=10161871

India has found itself excluded from the privileged list of 18 allied countries granted unlimited access to cutting-edge American AI chips.

A few months ago, Indian Prime Minister Narendra Modi, while addressing the Indian diaspora at a community event in New York, offered a unique perspective on AI. While the world associates AI with Artificial Intelligence, PM Modi believes that AI also stands for ‘America and India’.

While he stressed the thriving partnership between the two countries, on January 13, the US sent shockwaves through the global tech industry when the then Biden-Harris administration announced a regulatory framework for the responsible diffusion of advanced AI technology.

The framework created distinct tiers of access. While nations like Germany, South Korea, Japan and 15 other allies have received unrestricted access to cutting-edge American AI chips, others like China, Russia, and North Korea faced a complete block. 

Notably, India found itself excluded from the privileged list of 18 allied countries granted unlimited access, instead being placed among the third-tier destinations requiring explicit licensing.

The new rules for chip production and development provide flexibility with two key allowances. Companies can import up to 1,700 GPUs, valued at around $40-50 million, without needing a licence. 

However, larger imports, worth up to $1 billion, will require a licence review. This approach ensures that smaller imports remain easy to access while maintaining stricter oversight of larger purchases.
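As a rough back-of-the-envelope check (our own arithmetic, not figures from the rule text), the 1,700-GPU threshold and the $40-50 million valuation together imply a per-accelerator price of roughly

\[
\frac{\$40\text{M}}{1{,}700} \approx \$23{,}500 \quad \text{to} \quad \frac{\$50\text{M}}{1{,}700} \approx \$29{,}400,
\]

which is broadly in line with prevailing prices for data-centre-class AI GPUs such as NVIDIA's H100.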

What Happens to India Now?

While this raises questions about India-US relations, it could also impact the India Semiconductor Mission (ISM). The rules create a complex new reality for India's tech ambitions, yet they could also boost the country's global standing, since countries like China have no access to this technology at all.

The rule change triggered widespread international discussions as it updated controls for “advanced computing chips” and mandated authorisations for exports, re-exports, and in-country transfers involving a wider range of countries.

Ashok Chandak, president of the India Electronics and Semiconductor Association (IESA), said, “In the short term, the impact is expected to be minimal.” The long-term implications, however, deserve careful consideration.

Meanwhile, Ajai Chowdhry, who helped build one of India’s earliest tech success stories as the co-founder of HCL, sees this as a pivotal moment. 

Drawing parallels to India’s past, Chowdhry noted, “This brings us back to the old licensing regime.” “We faced similar export controls on space and atomic energy…and we then emerged out of that successfully, getting our brilliant engineers to create our own technologies.”

Chandak noted that large AI data centres, which need hundreds of thousands of GPUs, could face delays or reductions in scale, potentially giving global companies a competitive edge over Indian firms.

However, he also saw a silver lining. “Small-scale setups could still enable experimentation, innovation, and restricted model development.”

India Can Create Its Own NVIDIA

According to Chris Miller, the author of Chip War: The Fight for the World’s Most Critical Technology, NVIDIA is “by far the most important” company in the semiconductor industry. 

“I am sure most of the design of NVIDIA and AMD GPUs must have been done in India by Indian engineers,” Chowdhry said. This makes the current situation of US sanctions particularly interesting.

Recently, Miller praised India's deep talent pool in chip design during an interview with AIM.

India already has a promising path forward through RISC-V, the open-source chip architecture on which IIT Madras has built its homegrown SHAKTI processors. Unlike proprietary architectures that require expensive licensing fees, RISC-V is open source, which means India can freely use and modify it to build its own advanced chips.

“This is the new Unipolar world, where every country is on its own,” Chowdhry observed. 

To turn this challenge into an opportunity, he advocated for a practical approach: boost government funding for chip design through the ISM and make it accessible to companies of all sizes. 

“Maybe we can create our own NVIDIA and AMD in the next ten years!” he suggested. Notably, increased funding from ₹50 crore to ₹150 crore could help Indian companies, from established corporations to innovative startups, drive this transformation.

Moreover, in a recent post, NVIDIA strongly criticised this rule and argued that the regulation threatens to undermine US leadership in AI and stifle innovation worldwide.

India’s Current Semiconductor Landscape

India’s semiconductor landscape is undergoing significant transformation, marked by substantial investments, strategic initiatives, and emerging challenges. 

Global industry leaders are recognising India’s potential. The government’s commitment to fostering a robust semiconductor ecosystem is evident through the ISM. 

However, the industry continues to face challenges. Despite producing approximately 1.5 million engineering graduates annually, there are concerns about the specialised skills required for semiconductor design and manufacturing. 

Experts highlight the need for sustained investment in both technical and business talent to bridge this gap. Moreover, India's semiconductor industry has traditionally been ‘fabless’, with designs conceived domestically but manufactured overseas.

This reliance on external fabrication poses security risks within the supply chain. Efforts are underway to establish domestic fabrication facilities, with companies like PSMC and Tata Electronics planning to build fabs in India by 2026. 

Hence, while India is making strides in the semiconductor sector through strategic investments and initiatives, gaps in skills, fabrication capacity and supply chain security remain.

Going Forward

Looking ahead, India might have some flexibility in accessing advanced chips through a programme called the National Validated End-User (NVEU). 

This is because India faces a unique situation. It doesn’t re-export high-end chips (like Compute ICs) or make advanced chips itself, but it does have major design centres for big companies like NVIDIA and AMD. 

This could make it easier for India to get approval for chip licences under this programme. 

The programme also sets limits on how many high-performance chips (like H-100 GPUs) India can receive each year. These limits will grow over time, with fewer than 1 lakh GPUs in 2025, increasing to 2.7 lakh units in 2026 and 3.2 lakh units by 2027.

This gradual increase allows for controlled access to advanced chips over the next few years.

The post US Export Rules Put India’s Chip Industry at Crossroads  appeared first on Analytics India Magazine.

India’s Semiconductor Dream a Decades-Long Pursuit, Says Chip War Author https://analyticsindiamag.com/deep-tech/indias-semiconductor-dream-a-decades-long-pursuit-says-chip-war-author/ Mon, 20 Jan 2025 07:07:14 +0000 https://analyticsindiamag.com/?p=10161771

“It’s not going to go from an initial level to Taiwan’s level overnight,” Chris Miller, author of Chip War, said.

India’s semiconductor aspirations have captured global attention with investments from companies and investors around the world. The country is setting ambitious goals to become a major player in the chip manufacturing landscape. However, one of the industry’s most influential voices has suggested that the road ahead may be “inevitably” longer than anticipated. 

In an exclusive interview with AIM, Chris Miller, the author of ‘Chip War: The Fight for the World's Most Critical Technology’, said that India is in the early stages of building out its chip industry.

While Miller outlined India’s potential, saying, “India is now seeing more investment than ever in semiconductor manufacturing and design”, he also highlighted challenges. According to him, it will be an “inevitably decades-long process”.

Miller stressed the need to foster homegrown companies to create a robust domestic ecosystem. Drawing comparisons to industry leaders like Taiwan and South Korea, he said, “India is not going to go from an initial level to Taiwan’s level overnight…It took countries like Taiwan and Korea decades to build out their chip industry starting in the 1970s.”

Race with China, Taiwan, and the US

India’s vision of becoming a semiconductor powerhouse by 2047 aligns with its broader ‘Viksit Bharat’ mission. 

Miller advocated for strategic planning and consistent investment. “A single plant can take three or four years to build once you start construction, and there’s usually a couple of years of planning beforehand, so this industry is used to thinking in terms of decades,” he said. 

The global semiconductor race is defined by one name: Taiwan Semiconductor Manufacturing Company (TSMC). Producing 99% of the world’s AI accelerators, TSMC has become indispensable in powering the technological advancements of AI-driven industries. 

Nonetheless, Miller added, “India is arguably one of the world's top countries in chip design talent, second only to the United States.” 

Chips now represent the largest flow of goods into China. This highlights their strategic importance in the geopolitical tug-of-war between the US and China. 

Every major AI system, from generative models like ChatGPT to advanced data centres, relies on TSMC’s cutting-edge chips. 

The stakes in the semiconductor race are immense. With Miller noting that Moore's Law, which predicts a doubling of transistor counts roughly every two years, “is changing”, TSMC is pioneering alternatives like 3D stacking and advanced packaging. These innovations enable continued improvements in AI chip performance and secure TSMC's leadership.
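For readers unfamiliar with the scaling Miller refers to, Moore's Law is often written as a simple exponential (a textbook simplification, not a formula from the interview):

\[
N(t) \approx N_0 \cdot 2^{\,t/2},
\]

where \(N_0\) is today's transistor count and \(t\) is the number of years ahead; ten years out, a chip would be expected to carry roughly \(2^{5} = 32\) times as many transistors. It is this assumption of steady doubling that alternatives such as 3D stacking and advanced packaging are meant to sustain as conventional transistor shrinking slows.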

While competitors like NVIDIA and Broadcom play vital roles in design, TSMC’s scale and expertise make its dominance clear. 

Leveraging Established Technologies  

On the production front, Miller advised India to start with established technologies rather than diving into cutting-edge innovations like 2-nanometre or 3-nanometre nodes. 

While countries like the US and China forge ahead with advanced 3–5 nm chip production, India finds itself grappling with foundational challenges. Despite renewed efforts and increased budgetary allocations, India’s semiconductor ambitions remain distant. 

Simultaneously, China's reported breakthrough with 3 nm chips challenges the US-led sanctions that sought to stifle its progress. India's planned facilities, meanwhile, are still targeting mature nodes like 28 nm and 40 nm.

Encapsulating India’s predicament aptly, semiconductor analyst Arun Mampazhy said, “It’s crucial that we begin rather than engage in an endless debate over the best starting point. India really does not have much of a choice in this.” 

The Indian government has shown a commitment to the sector with the interim Budget's ₹6,903 crore allocation, more than double the previous year's amount. However, experts argue that funding alone cannot bridge decades of technological and infrastructural deficits.

Miller also pointed out that many nations, including European countries and Israel, are following this practical approach. “There’s a lot of innovation happening in older process technologies, especially as they’re being repurposed for applications like AI,” he noted.

Challenges in Chip Talent and Infrastructure  

Miller identified talent and infrastructure as the two major hurdles to India’s ambitions. While India has considerable expertise in chip design, expanding into manufacturing, testing, and packaging requires specialised skills across multiple disciplines. 

“It takes time to build this talent, with different educational backgrounds, internships, and work training,” Miller explained.

Infrastructure development is another critical area. As India advances into semiconductor manufacturing, it will need specialised infrastructure, including the unique chemicals, materials, and tools the industry depends on.

While Miller expressed optimism about the current pace of progress, he reiterated that building full-scale capacity “will likely take a decade”. “Progress in this area is already visible, with ongoing efforts to establish the necessary infrastructure,” he concluded.

The post India’s Semiconductor Dream a Decades-Long Pursuit, Says Chip War Author appeared first on Analytics India Magazine.

India’s Semiconductor Talent Pool May Not Be As Skilled, After All https://analyticsindiamag.com/deep-tech/indias-semiconductor-talent-pool-may-not-be-as-skilled-after-all/ Thu, 16 Jan 2025 10:30:00 +0000 https://analyticsindiamag.com/?p=10161555

“The talent pool here is extremely limited.”

India produces around 1.5 million engineering graduates annually, creating a robust foundation for technological innovation through a steady stream of talented professionals.

Renowned institutions like the Indian Institutes of Technology (IITs) and the National Institutes of Technology (NITs) are central to this ecosystem. As of now, the country’s semiconductor industry is in a transformative phase, driven by the skilled talent pool. However, things may not be as rosy as they appear. 

Although the country prides itself on the sheer quantity and quality of its talent, experts are worried about gaps in specialised skills that need attention. The talent base needs to be strengthened further to align with the requirements of global tech giants.

In an interview with AIM, Ankit Anand, founding partner at Riceberg Ventures, summed up the issue, saying, “The talent pool here is extremely limited even though IITs churn out around 10,000 engineers a year.”

Limited Talent Pool

To start with, there are very few researchers and scientists in India, particularly in advanced fields like molecular biology, quantum computing, and AI. Besides, the number of PhDs and specialised scientists remains alarmingly low, added Anand. 

This disparity is compounded by India’s historical focus on an elitist education model, which emphasises exclusivity and limits the scale of institutions to produce broader pools of talent. 

Adding to this issue is the lack of financial and social incentives for pursuing advanced research. “Even if you do a PhD in India, you won’t get a salary better than a BTech. So, the incentives are not aligned,” Anand emphasised. This, he said, was because there were hardly any companies in the country that needed such talent.

At the recently concluded VLSID Conference 2025, many industry leaders, such as Santhosh Kumar, MD at Texas Instruments, and Hitesh Garg, VP and India MD at NXP Semiconductors, emphasised the heavy reliance on a robust talent pipeline and a thriving startup ecosystem.

However, many others pointed out the opposite: a shortage of truly proficient, high-quality talent.

Chris Miller, the acclaimed author of Chip War: The Fight for the World’s Most Critical Technology, further highlighted this. He identified talent and infrastructure as two key challenges in India’s path to progress in the chip space. 

In a conversation with Satya Gupta, president of the VLSI Society of India, Miller called for sustained investment in India’s talent pipeline, including both technical and business talent.

The Indian ‘Brain Drain’

A significant proportion of India’s top talent migrates abroad, attracted by better opportunities and infrastructure. This ‘brain drain’ exacerbates the shortage of highly skilled professionals required to drive deep tech advancements within the country.

With its vast population, India has the potential to produce a significantly larger number of engineers and scientists than countries with smaller populations, such as Switzerland, Germany or even the United States.

These countries thrive despite their smaller local talent pools thanks to their vibrant innovation hubs. Anand highlighted that this is largely because they attract top talent from around the world and have created a quality brand for their country.

The US, for instance, didn’t rely solely on local institutions to build Silicon Valley. Instead, it created a global brand, ‘the American Dream,’ that drew skilled professionals from across the globe. 

Switzerland and Germany also created a monopoly in their local products with the guarantee of ‘Swiss’ or ‘German’ quality. 

India, by contrast, has yet to establish a similar pull. The country lacks the global appeal to attract international talent at scale. “We don’t have the kind of infrastructure to support this,” added Anand. 

Nevertheless, There is Hope

Currently, 20% of the global semiconductor design talent comes from India, with over 35,000 engineers engaged in chip design. “I think India has a large talent base in chip design because it’s developed investment from a whole variety of international firms in India,” Miller told AIM.

Home to institutions like IIT Kanpur and IIT Roorkee, India’s tier 2 and 3 regions are the real reservoirs of untapped potential. Moreover, while local talent may exist, it is often underutilised. 

Sandeep Bharathi, CDO at Marvell Technology, told AIM that the company is capitalising on its skilled workforce to drive innovation and growth and bring it closer to the goal of achieving $1 million in revenue per employee.

With a major presence in India across Bengaluru, Pune, and Hyderabad, Marvell leverages these tech hubs for their rich talent pools, with Bengaluru leading as a centre for tech innovation. The company has “more than doubled its Indian workforce over the last four years,” demonstrating confidence in the region's talent.

Bharathi said the company sources its talent directly from universities, ensuring a steady pipeline of early-stage professionals through partnerships and internship programs. These internships are often converted to full-time roles, providing opportunities for young professionals to grow within the company. 

In addition to organic growth, Marvell acquires companies with specialised technologies, gaining access to niche talent that strengthens its workforce in advanced domains.

Even though India offers strong capabilities in digital design, verification, and physical design, Bharathi says there are skill gaps in areas like silicon photonics, electro-optics, and advanced mixed-signal design. 

He stressed the need for academia to align the curriculum with industry needs to address these gaps. The company collaborates with universities to influence course content, bridging the skill gap through improved syllabi and practical training. 

While universities are introducing courses in emerging fields like quantum computing and space tech, practical exposure for students remains an area for improvement.

Govt’s Role in Driving India’s Deep-Tech Ecosystem 

According to Anand, the government’s intent to support innovation is clear. “Two years ago, I could have listed many things the government needed to do, but now, they’ve made significant progress,” he said. 

State governments have been continuously investing in semiconductors, launching initiatives such as the Karnataka government's approval of a ₹3,425.60 crore investment and Andhra Pradesh's recently signed ₹14,000 crore MoU. 

The central government is also stepping up its policies to foster technologies that will boost India’s semiconductor mission. However, implementing these policies effectively remains a challenge. 

Anand also highlighted the importance of the government becoming a customer of innovative solutions. “If the government becomes a customer, it signals to investors that the startup can successfully sell its product at a scale, and that drives further investment.”

Even private companies are pitching in. On his recent visit to India, Michael Hurlston, CEO of Synaptics, outlined to AIM the company’s plans to double its workforce there over the next three years, increasing from 400 to 800 employees. 

This expansion is driven by India's rich pool of engineering talent. The existing talent in cities like Bengaluru and Chennai is open to quick upskilling to meet the demands of cutting-edge technology.

Additionally, companies like NXP Semiconductors, Israel’s Tower Semiconductor, Adani Group and Bartronics India Ltd are fostering India’s semiconductor industry with investments and partnerships ranging from $1-10 billion.

The post India’s Semiconductor Talent Pool May Not Be As Skilled, After All appeared first on Analytics India Magazine.

Marvell Technology Expands Footprint in India, Targets $1 mn Revenue Per Employee https://analyticsindiamag.com/deep-tech/marvell-technology-expands-footprint-in-india-targets-1-mn-revenue-per-employee/ Wed, 15 Jan 2025 04:52:27 +0000 https://analyticsindiamag.com/?p=10161412

Over 70% of the company’s revenue comes from enabling infrastructure for AI applications, CDO Sandeep Bharathi said.

Electronics and semiconductors contribute 5.6% of India’s GDP today. As the global demand for AI-driven solutions rises, India is positioning itself as a key player in the semiconductor industry. 

Marvell Technology, an American semiconductor developer, is among the global semiconductor giants tapping into India’s potential. 

In an interview with AIM, Sandeep Bharathi, CDO at Marvell Technology, discussed the company's focus on advancing data infrastructure, from cloud data centres to moving, storing, processing, and securing data.

How is the Company Performing in India?

Marvell has steadily increased its footprint in India, with three major sites in Bengaluru, Pune, and Hyderabad. The company, which employs over 1,500 professionals, collaborates with universities to upskill talent. This reinforces India's role as a semiconductor hub.

“The way we take a look at it is that Marvell has around 7,000 employees globally. Next year, we’ll hit a million-dollar revenue per employee, which is an important milestone,” Bharathi said. 

Moreover, he highlighted a slowdown in certain sectors, such as enterprise and automation, saying, “The way we look at it is, ‘It’s a diversified product portfolio, and we want to be in businesses where we make money’.”

Position in the Semiconductor Market

In the global semiconductor landscape, Marvell Technology stands alongside giants like NVIDIA, Broadcom, Qualcomm, and AMD in terms of market capitalisation. When asked about its competitive edge, Bharathi attributed it to the company’s reputation as a breakthrough innovator. 

“The market is giving us our size multiple only because we are bringing technologies to the market that people want and care about…That’s the reason you can see the market react.”

One of the most significant challenges in semiconductor development is cost. Advanced process nodes require substantial investment, and development costs for a single chip can range from $100 million to $200 million. 

This makes precision, and getting it right the first time, critical for success. “Missing the market window due to errors could result in significant financial and time losses,” he added. 

Packaging is Key  

Packaging technologies are critical in the semiconductor industry, particularly for creating cost-effective and energy-efficient solutions for data centres. 

With massive infrastructure and power demands, data centres benefit from advancements in 2D, 2.5D, 3D, and 3.5D packaging. These innovations improve chip integration and reduce energy consumption while enhancing performance.  

Another essential element is data movement from chip to chip, board to board, or rack to rack. “Signaling technologies that are necessary to do this are complex, whether the data bandwidth is 100G, 200G, or 400G,” Bharathi said.

Higher Investments in AI, Efficiency and R&D

Marvell Technology continues to prioritise research and development (R&D) by focusing on advanced geometries like 3nm and 2nm. These cutting-edge technologies, which increase in cost by 30% annually, require significant investment. 

The company has doubled its R&D footprint in recent years, reflecting its commitment to innovation. Over the past five years, Marvell's revenue has grown from $3.3 billion and is projected to double by next year. Over 70% of the company's revenue comes from enabling infrastructure for AI applications, such as hyperscale data centres. 

Additionally, Marvell develops semiconductor solutions for 5G and automotive Ethernet to support connected cars. These solutions include linking sensors, radar, LiDAR, and cameras through Ethernet backbones to enable seamless communication within automotive systems.

As energy demands rise, the company prioritises energy-efficient chip designs to support the sustainability of large-scale data centres. This aligns with broader industry goals to reduce environmental impacts.

Contrary to popular opinion, Bharathi called data localisation an impossible task. “It’s great to talk about, oh, we have to keep the data localised. I honestly don’t think that’s going to happen…The only way is if we relocate an entire city to the middle of nowhere, and you start from scratch.”

What do Other Leaders Have to Say?

During a fireside chat at the VLSID Conference 2025, the panelists highlighted India’s ambitions to grow its semiconductor contribution and accelerate the Viksit Bharat 2047 initiative. 

Santhosh Kumar, MD at Texas Instruments, highlighting India’s potential, said, “By 2047, with a projected GDP of $30 trillion, electronics and semiconductor contribution could grow to 10%, equating to a $3 trillion industry.”

Other industry veterans, including Sanjay Nayak, co-founder of Tejas Networks, and Ganapathy Subramaniam, managing partner at Yali Capital, also emphasised this.

Subramaniam further said that, unfortunately, even though India consumes a large share of electronic gadgets, roughly 5% of the global electronics stack, and about 25% of the value of those electronics comes from semiconductors, “We contribute nothing as an Indian headquarters.” 

Fireside chat between semiconductor leaders at the VLSID Conference 2025 in Bengaluru.

What is the Goal for India?

The US dominates global semiconductor design, accounting for 75% of the total, followed by Taiwan, Korea, Japan, and Europe. America has the chip design market but not as much of a consumer market. India contributes significantly to talent but lacks domestic intellectual property rights (IPR).

The path to becoming a semiconductor powerhouse involves building robust intellectual property (IP) and fostering a domestic ecosystem for product development. While India hosts a significant portion of global semiconductor design talent, multinational corporations own a large portion of the intellectual property.

The goal is to increase India’s value addition in semiconductor manufacturing from the current 10% to 40–50% by 2047. Achieving this would result in a 100-fold increase in value addition compared to today, which is essential for driving economic growth, creating jobs, and securing a strategic economic position globally.
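As a rough order-of-magnitude check (our own arithmetic, assuming the value-added shares apply to the electronics consumption figures cited around the VLSID discussions, roughly $200 billion today and $3 trillion by 2047):

\[
0.10 \times \$200\text{B} \approx \$20\text{B (today)}, \qquad 0.45 \times \$3\text{T} \approx \$1.35\text{T (2047)},
\]

a jump of roughly 65-70 times, which sits in the same order of magnitude as the 100-fold figure cited above.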

Partnerships are Essential

A key takeaway is the need to create India-centric innovations for sectors such as mobility, renewable energy, and telecommunications. With the advent of electric vehicles, solar power systems, and next-generation communication technologies, the demand for India-specific semiconductor solutions is surging. 

Nayak said that strengthening India’s semiconductor ecosystem by fostering partnerships with GCCs within India is crucial. 

With India’s vast talent pool and market capacity, there is a unique opportunity to create mutually beneficial partnerships. 

“Japan is an ageing economy with an average age of 50. Huge technologies are available, like OLED displays, but they have not been able to compete squarely with Korea and China,” Subramaniam added. 

Two-Wheelers Could be a Big Opportunity

The two-wheeler segment is another promising local market for India. The country's growing consumption of chips and electronics provides a foundation for this approach. 

“Two-wheeler, automobile, it’s our market. Why do we make controllers, which are four-wheeler controllers, for our two-wheeler market? It’s a large market,” Subramaniam said.

Moreover, the strategic importance of semiconductors in national security and economic growth was underscored. 

India can secure a seat at the global technology table by developing domestic manufacturing capabilities and focusing on partnerships with global players. The conversation also called for nurturing talent in areas like product management and system design to ensure end-to-end capabilities.

The post Marvell Technology Expands Footprint in India, Targets $1 mn Revenue Per Employee appeared first on Analytics India Magazine.

Robotaxis and Self-Driving Cars to be the ‘First Multi-Trillion Dollar Industry’ https://analyticsindiamag.com/deep-tech/robotaxis-and-self-driving-to-be-the-first-multi-trillion-dollar-industry/ Fri, 10 Jan 2025 08:37:23 +0000 https://analyticsindiamag.com/?p=10161109

Various companies have introduced platforms to streamline the development of AI-powered AV systems.

There is no stopping Jensen Huang. After sovereign AI, NVIDIA’s chief has a newfound obsession, mostly revolving around self-driving cars and AI agents. Speaking at CES 2025, he highlighted the potential of robotaxis and autonomous vehicle (AV) technologies to transform global mobility and logistics. 

“I predict that this will likely be the first multi-trillion dollar robotics industry,” he said in his keynote. This claim comes amid a growing wave of innovation, investment, and competition in the autonomous driving landscape.

Robotaxis show promise in reducing traffic congestion, lowering emissions, and providing mobility solutions that are accessible to millions worldwide.

Moreover, the integration of autonomous vehicles into public and private fleets is expected to create millions of new jobs in AI development, system integration, and maintenance, further cementing the industry’s economic impact.

Breakthroughs in AI platforms, sensor technologies, and cloud computing are backing the push for autonomous mobility. Companies like NVIDIA have introduced platforms such as DRIVE Hyperion and Cosmos to streamline the development of AV systems. 

DRIVE Hyperion enables autonomous vehicle manufacturers to handle perception, mapping, and decision-making efficiently, while the recently launched Cosmos platform generates synthetic driving environments for training AV algorithms. 

Using tools like AI traffic generators and neural reconstruction engines, Cosmos creates high-fidelity 4D simulations, turning hundreds of real-world drives into billions of effective miles for training data.

Real vs Synthetic Data Debate

The role of synthetic data in training AV systems has sparked debate among industry experts. Sawyer Merritt, a Tesla investor, referred to NVIDIA's Cosmos and pointed out that synthetic data, while innovative, cannot replace the reliability of real-world video driving data. 

“Synthetic driving data is like using ChatGPT—you might trust what you see is true, but you often can’t be entirely certain without further validation,” he said.

Merritt emphasised Tesla’s unmatched advantage: over 7.1 million vehicles on the road worldwide, collectively driving upwards of 75 billion miles annually, with more than 56 million onboard cameras capturing real-world video data.

The company’s cars capture real-world driving scenarios, offering unmatched insights for self-driving. 

In a recent conversation with AIM, Sami Atiya, president of the robotics & discrete automation business area at ABB, also expressed his views on the subject. He believes synthetic data will play a huge role in robotics as the company is already using it for “arm simulations and complex path-planning” for its in-house robots. 

Synthetic data opens up endless possibilities without ever needing to touch the robot. But he also reminded us to be wary of any biases or misleading elements in the data. “The main expertise of the people who actually use these AI systems will become much more crucial to know the right input and output of data that is not biased,” Atiya said.

He agreed with Ilya Sutskever on the end of traditional pre-training due to data limitations, emphasising AI’s reliance on scaling models and exploring agents and synthetic data as the future of AI innovation. “We will reach a plateau, and we are about to see more capacity being thrown at systems,” he said.

An Industry on the Move

Waymo, Alphabet's self-driving subsidiary, has been conducting extensive real-world trials and is expanding its operations in cities across the United States, with a Tokyo rollout planned for early this year. 

Meanwhile, Cruise, a subsidiary of General Motors, has been deploying autonomous taxis in select locations, including San Francisco and Phoenix. At the same time, Mobileye, an Intel company, launched a unique sensor technology for layered visuals, which generates 3D perception for a reliable understanding of the environment. 

Mobileye’s latest system on a chip, the EyeQ6, powers this advanced processing, as announced by founder and CEO Amnon Shashua in his keynote address at CES 2025. The tech, which offers high-resolution sensing capabilities that address camera weak spots, will enter production in 2026. 

NVIDIA also recently announced collaborations that will shape the future of autonomous vehicles. Toyota, the world’s largest automaker, is building its next-generation vehicles on NVIDIA DRIVE AGX Orin, running the safety-certified NVIDIA DriveOS operating system. 

These vehicles will offer functionally safe, advanced driving assistance capabilities. Also, partnerships with companies like Aurora and Continental highlight the widespread adoption of these technologies across legacy automakers. 

Other leaders using Cosmos to build physical AI for AVs include Foretellix, Uber, Waabi and Wayve. Such partnerships aim to overcome challenges in autonomous driving and ensure rapid deployment of robotaxis and self-driving fleets worldwide. 

For instance, HERE Technologies and AWS recently announced a $1 billion partnership to develop AI mapping solutions critical for precise navigation.

Meanwhile, Uber and NVIDIA recently announced a partnership to support the development of AI-powered autonomous driving technology. More details on this are expected later this year. 

Dara Khosrowshahi, CEO of Uber, said in the official announcement, “Generative AI will power the future of mobility, requiring both rich data and very powerful compute.” 

With Uber completing millions of trips every day, Khosrowshahi hopes to create safe and scalable autonomous driving solutions for the industry. Major companies now look forward to pairing the NVIDIA Cosmos platform with NVIDIA DGX Cloud to strengthen their AV programmes.

Additionally, Amazon and Qualcomm have also announced a collaboration to revolutionise in-car experiences by combining Qualcomm’s Snapdragon Digital Cockpit platform with Amazon’s AI and cloud services. 

Last month, Volvo Cars CEO Jim Rowan met Qualcomm president & CEO Cristiano Amon for a lap in the eX90 and a chat about the car as the new computing space.

The Landscape in India, Starting in Bengaluru

Bengaluru, known for its traffic and tech innovation, is emerging as a key player in India’s autonomous vehicle landscape. 

The Bengaluru Traffic Police’s exploration of a digital twin for traffic management signals a tech-forward approach crucial for enabling robotaxis and autonomous vehicles. 

Bengaluru’s police commissioner envisions a future where AI boosts enforcement and management, serving as a ‘force multiplier’ to meet the city’s unique needs. However, infrastructure upgrades, better rule adherence, and two-wheeler-focused technologies remain critical.

AI-based cameras reduce violation processing time from 300 seconds to 5 seconds. While enforcement leverages rule-based AI effectively, traffic management faces challenges like unpredictable road behaviour, requiring real-time adaptability and instant decision-making for optimal impact.

The post Robotaxis and Self-Driving Cars to be the ‘First Multi-Trillion Dollar Industry’ appeared first on Analytics India Magazine.

Chip Pe Charcha Resumes in Bengaluru https://analyticsindiamag.com/deep-tech/chip-pe-charcha-resumes-in-bengaluru/ Mon, 06 Jan 2025 15:30:17 +0000 https://analyticsindiamag.com/?p=10160817

Advising prioritising talent development, author Chris Miller said, “If you’re investing in talent, you’re taking very distributed bets.”

With the US diversifying its chip strategy and containing China’s dominance by imposing trade restrictions, the geopolitical shift could be pivotal for driving India’s semiconductor aspirations, author Chris Miller said. He is noted for his book ‘Chip War: The Fight for the World’s Most Critical Technology’.

Miller spoke at the ‘Chip Pe Charcha’ event, a semiconductor industry discourse held at the VLSID International Conference 2025 in Bengaluru on Monday. He further highlighted the critical role of AI processors in driving innovation, describing AI as the “killer app of this century”, even as its potential is yet to be explored. 

Geopolitical tensions have fragmented the semiconductor industry, creating separate spheres of influence led by China and the rest of the world and impacting supply chains and market dynamics. Today, not a single country in the world is self-sufficient in semiconductors across the supply chain. Everyone is reliant on somebody else.

Praising India’s focus on fostering design, manufacturing capabilities and strengthening partnerships with global leaders, Miller said, “India’s proactive semiconductor policies and investments position it as a significant player…The broader electronics ecosystem here in Bengaluru and Tamil Nadu is exploding, I think.”

In a previous discussion with Satya Gupta, president of the VLSI Society of India and Epic Foundation CEO, Miller noted that the explosion of AI technologies has driven demand for specialised semiconductors like GPUs while also advancing chip design and manufacturing efficiency through AI-powered innovations.

The global rush to build fabrication facilities may lead to overcapacity in traditional semiconductors, although advanced chips for AI and electric vehicles remain in high demand with persistent shortages.

Global Semiconductor Rivalry

The global semiconductor industry is witnessing fierce competition between the United States and China, with tensions escalating over technology dominance. 

Semiconductor production requires contributions from multiple global players, such as lithography machines from the Netherlands-based semiconductor company ASML, advanced fabrication in Taiwan, and design expertise from the US. 

This reliance on partnerships has led to a deepening technological collaboration between the US and India. In recent years, India has aligned its policies to boost domestic semiconductor manufacturing and integration, making it an attractive partner for global supply chains.

While China races to close the gap in advanced technology, challenges like dependency on imported equipment and a five-year lag behind Taiwan Semiconductor Manufacturing Company (TSMC) persist. 

Advising prioritising talent development, Miller said, “If you’re investing in talent, you’re taking very distributed bets.”

While acknowledging India’s foundational work in manufacturing and packaging, he encouraged the exploration of niche areas like 3D packaging and compound semiconductors for the energy transition. These, he argued, could offer India a competitive edge without requiring the massive capital investment of traditional manufacturing.

“Just to take an example, the way AI is growing, it will need more computers and more memory. So our best solution is with two virtual engineers, 3D representation,” he added.

India’s Vision

India’s semiconductor sector is poised for transformative growth, as highlighted by Satya Gupta during the VLSID inauguration. With the country’s electronics consumption projected to grow from $200 billion today to $3 trillion by 2047, semiconductors are expected to account for a staggering $810 billion of this market. 

Gupta described this as an opportunity positioning India to become a major global player. He emphasised initiatives like regional semiconductor chapters across India to decentralise development and foster localised innovation. 

Efforts such as the VSIP internship programme aim to bridge gaps between academia and industry, ensuring students are industry-ready. Additionally, partnerships with global foundries will grant academic institutions access to 55 nm foundry nodes, enabling hands-on experience for aspiring engineers.

Hitesh Garg, VP and India MD at NXP Semiconductors, also emphasised, “The semiconductor industry’s future growth and innovation rely heavily on a robust talent pipeline and a thriving startup ecosystem.”

India’s ambition extends beyond global markets, with untapped opportunities in infrastructure and logistics sectors. Gupta called for leveraging advanced technologies to modernise domestic industries and further boost the country’s economic footprint. 

Energy Challenges and AI

As AI continues to evolve, its energy demands are becoming a critical concern. The event showcased the critical role of VLSI and embedded systems in driving transformative advancements across AI/ML, 5G, internet of things (IoT), quantum computing, and electric vehicles.

With AI applications being tested at both the network core and the edge, experts warn that the technology’s rapid growth could result in a 40-fold increase in usage. Projections suggest that by 2030, AI might consume 20% of the world’s electricity. 

This raises pressing questions about sustainability and the cost of powering AI-driven data centres and devices. Currently, the cost of energy for AI queries is often overlooked. However, as AI use expands, it will give rise to significant financial and environmental challenges. 

Miller highlighted this growing crisis during the discussion and said that some economies are already experiencing an increase in electricity consumption directly linked to AI. Despite these concerns, Miller expressed optimism and stressed the history of computing as a model for achieving greater efficiencies. 

He emphasised the importance of continuing this trend by focusing on innovations in hardware, software, and energy-efficient system design. As market demand grows, so will the pressure to find sustainable solutions.

The rise of AI presents both challenges and opportunities. Balancing its energy needs with environmental sustainability will be a key focus for the next decade. This shift demands collaboration across industries and regions to ensure that AI’s growth remains efficient and accessible.

“I think there’s two things that I’m always struck by in talent development. One is the technical talent, and two is the business talent,” said Miller, with a call for sustained investment in India’s talent pipeline, encompassing both technical and business acumen.

The post Chip Pe Charcha Resumes in Bengaluru appeared first on Analytics India Magazine.

Chip Makers are Betting Big on US https://analyticsindiamag.com/deep-tech/chip-makers-are-betting-big-on-us/ Sat, 04 Jan 2025 05:20:54 +0000 https://analyticsindiamag.com/?p=10160724

The US offers a secure environment compared to regions with geopolitical tensions for a steady and dependable supply of these essential components.

The global semiconductor industry has experienced a significant change over the past few years, driven by disruptions caused by the pandemic, changes in consumption behaviour, and rising geopolitical tensions. As the world increasingly relies on state-of-the-art technologies, securing a stable chip supply has taken centre stage.

The onset of the COVID-19 pandemic five years ago turned the global semiconductor industry upside down. Unlike past economic disruptions, this crisis simultaneously hit both supply and demand. Factories in key hubs like China shut down, leading to chaos amongst chip makers.

The crisis highlighted the risks of overreliance on geographically concentrated supply chains and pushed companies to diversify and localise production. Notably, the US has emerged as a key player by using government incentives and private investments to alter the global landscape of semiconductors.

America’s Chip Comeback

Recognising the need to secure its semiconductor supply, the US government introduced substantial incentives to encourage domestic semiconductor manufacturing in 2022. This was an attempt to lower costs, create more localised jobs, strengthen the supply chain in the US and counter China’s advancements in chip production.

The CHIPS and Science Act, passed in 2022, allocated $52.7 billion to boost US semiconductor research, manufacturing, and workforce development.

The Act also allocated $1.5 billion to advance wireless technologies, support emerging fields like AI and biotech through new National Science Foundation (NSF) initiatives, and fund regional innovation hubs to boost local economies. 

Following the Act, companies announced nearly $50 billion in additional investments in American semiconductor manufacturing, which included a $40 billion investment by Micron in memory chip manufacturing and $4.2 billion through a partnership between Qualcomm and GlobalFoundries. 

In April last year, the US finalised a $6.6 billion subsidy for Taiwan Semiconductor Manufacturing Company (TSMC)’s production facility in Phoenix, Arizona.

Moreover, as part of the CHIPS and Science Act, Samsung Electronics was awarded up to $6.4 billion from the US government to build its new complex in Taylor.

Why Favour America?

Several factors make the US a sought-after hub for semiconductor manufacturing. The country offers a secure environment compared to regions with geopolitical tensions for a steady and dependable supply of these essential components.

Several AI chip companies, including TSMC and Samsung, have been shifting their focus to building manufacturing facilities in the US. TSMC’s Arizona facility has achieved a 4% better yield than the semiconductor giant’s manufacturing sites in Taiwan. 

Notably, states like Arizona and Texas provide ideal climate conditions for chip manufacturing facilities. Access to advanced research institutions, skilled labour, and strong infrastructure supports high-tech manufacturing.

Locating production facilities closer to major clients facilitates better collaboration and reduces logistical complexities. For example, TSMC’s Arizona plant is expected to serve clients such as NVIDIA, Apple and AMD, enhancing operational efficiency. 

Chipping In to Power the AI Revolution

During the COVID-19 pandemic, the demand for semiconductors used in cars, phones, and gadgets fell sharply as people tightened their budgets. At the same time, chips for cloud computing and home internet equipment saw a surge in demand, driven by the rise of remote work and online activities.

The AI revolution has ushered in a new wave of demand for advanced semiconductors. At the core of AI are processors like GPUs, CPUs, and accelerators from NVIDIA, AMD, and Intel, which power generative AI models. 

Data centres, transformed by companies like Dell and HPE, provide the infrastructure for AI to scale.

Chip production shines with innovators like ASML and TSMC, which create smaller, faster, and more efficient chips. Qualcomm and ARM’s custom designs extend AI into devices and industries. Meanwhile, companies like Micron and Western Digital provide robust data storage solutions, and design tools from Synopsys and Cadence speed up innovation.

These advancements have enabled AI to move from theory to real-world applications. Michael Hurlston, CEO at Synaptics Inc., recently claimed that chip design can be made even more efficient and cost-effective by shrinking AI models and optimising software to fit within a minimal memory footprint.

Innovating for a Smarter Future

As per reports, the new fabrication unit in Arizona, which produces 3nm and 4nm chips, will develop Apple A16 chips in the US. Additionally, AMD plans to manufacture its AI-focused high-performance computing (HPC) chips at this facility in Arizona.

China has also rapidly closed the gap with the US in AI capabilities. As reported recently, it was only six to nine months behind the US. “We were six or seven years behind,” 01.AI chief Kai-Fu Lee had said. Now, however, China seems to have surpassed the US. 

In retrospect, while the government’s benefits may have helped attract the attention of these giants, they could also have adversely affected their supply chains. With the rise in regulations surrounding the export of AI chips to China, Washington also plans to limit semiconductor shipments to some countries accused of supplying Beijing, in an effort to close China’s backdoor access to advanced chips.

The post Chip Makers are Betting Big on US appeared first on Analytics India Magazine.

]]>
Apple Intelligence Could Have Grave Consequences in India https://analyticsindiamag.com/deep-tech/apple-intelligence-could-have-grave-consequences-in-india/ Thu, 19 Dec 2024 11:01:42 +0000 https://analyticsindiamag.com/?p=10143981

Barely intelligent. Mostly artificial. Sometimes devastating.

The post Apple Intelligence Could Have Grave Consequences in India appeared first on Analytics India Magazine.

]]>

When Apple finally stepped into AI this year, it proudly called it ‘Apple Intelligence’, ditching the ‘artificial’ prefix altogether. That move has aged like milk – at least in the short term. Apple Intelligence’s latest outputs are nothing but artificial and, in one case, have proven to be disastrous. 

A few days ago, the company sent out a bizarre notification on the iPhone regarding a BBC news report about the arrest of Luigi Mangione, the lead suspect in the murder of UnitedHealthcare’s CEO. “Luigi Mangione shoots himself…” read a part of the notification. 

The original post from BBC read: “People who knew him told US media he suffered from a painful back injury and that he had become socially withdrawn in recent months.”

This was followed by: “Mr Thompson, 50, was fatally shot in the back last Wednesday morning outside the Hilton hotel in Midtown Manhattan where UnitedHealthcare, the medical insurance giant he led, was holding an investors’ meeting.”

In the first paragraph, the BBC refers to Mangione, while in the second, the victim is Thompson. Apple Intelligence most likely misinterpreted the context of the news article. Following the fiasco, the BBC complained to Apple about the same. 

AIM reached out to Apple but did not receive a response.

Apple Shouldn’t Add Fuel to The Fire

Errors like these could have terrible consequences in a country like India, where fake news is already a huge problem. According to the World Economic Forum’s 2024 Global Risks Report, India ranked the highest for the risk of misinformation and disinformation. 

According to the report, which surveyed 1490 experts from academia, business, government, and civil society, ‘AI-generated misinformation and disinformation’ ranked second only to ‘extreme weather conditions’ in the global risk landscape. 

“Synthetic content will manipulate individuals, damage economies and fracture societies in numerous ways over the next two years,” the report warned. 

The emergence of another way to amplify the spread of fake news is the last thing India needs. Misleading headlines and inaccurate summaries of news on sensitive topics could cause unrest in a country like India. 

For example, in 2020, two individuals fell victim to violent lynching fueled by WhatsApp rumours about local thieves. Mistaken identity led to the brutal attack, leaving them injured. 

Moreover, iPhone 16, Apple’s latest offering that supports Apple Intelligence, hit record sales on its opening day in India, up 15-20% compared to last year. Research from Counterpoint for Q3 2024 showed that Apple held a 21.6% share of the market by value, second only to Samsung. 

“As consumers increasingly invest in premium smartphones, Apple has cemented its status as the top choice for premium buyers in India, supported by its aspirational image and expanding footprint,” read the report. 

“Experiencing a high-growth phase in India, Apple recorded 34% YoY growth. Q3 2023 also marked the best quarter for Apple’s shipments in the country, which crossed 2.5 million units,” read another report.

Coming back to Apple Intelligence, the BBC mishap should not come as a surprise. Misinterpreted, misleading, and bizarre notification summaries in Apple Intelligence have become the internet’s latest meme sensation. 

Notifications Gone Rogue

Apple’s intent with summaries is to scan important details, especially in group chats, to help the user get key information quickly. In a user guide, Apple said, “With its deep understanding of language, Apple Intelligence can help condense the information most important to you.” 

But does it? 

For context, there is an entire subreddit called r/AppleIntelligenceFail, where users share some of the most confusing and out-of-context results derived from Apple Intelligence. 

In one post, a user on Reddit showcased an error where Apple Intelligence mixed up the context between two notifications. 

The first notification showed Alex Greenwood being injured in a football game she was playing and Dejan Kulusevski scoring a goal in the game he was playing. 

But Apple Intelligence mixed them up and said, “Kulusevski and Greenwood were injured in their respective matches.” 

“Summaries need some work,” read the post by u/googang619 on r/AppleIntelligenceFail.

If there’s one clear thing, it’s that AI models, especially the ones inside iPhones with Apple Intelligence, struggle to understand context, slang, and nuances in conversations. 

This was also a concern when Reddit released the AI Answers feature, which summarises threads based on user input. 

The content on Reddit is mostly riddled with sarcasm, insider jokes, and references that the community enjoys reading. Is it fair then that AI is allowed to meddle with it? 

Moreover, sticking to its privacy promise, Apple has been hell-bent on keeping the AI models local and on-device. If the models need to route their inputs to the cloud, they do so through Apple’s Private Cloud Compute (PCC). 

Apple maintains that the ‘cornerstone of Apple Intelligence is on-device processing’. But how capable can an on-device model be? Apple hasn’t revealed any details about the parameter size. Still, if what we’ve seen is anything to go by, it is far from capable of understanding the nuances of human conversations. 

Not Many are Interested in Apple Intelligence

A recent survey adds insult to Apple’s injury by suggesting that a majority of iPhone users do not find Apple Intelligence to add any value. The survey, conducted by Sellcell, included 1,000 users who owned iPhones with Apple Intelligence. 

About 73% of the respondents said that they were not satisfied with the AI features and failed to find enough value. However, 47% of iPhone users said that its AI features were ‘somewhat an important deciding factor’ while buying one. 

This is, of course, not what Apple expected, given the standards they have set. Quinn Nelson, a popular YouTuber, reacted to the survey results and said, “Perhaps that’s because Apple Intelligence does very little to nothing in value so far.”

“Had to turn Apple Intelligence off. Just not ready for prime time. Notification summaries were wrong and incoherent. Auto-replies in iMessage were comically mundane,” said Morgan Brown, VP of product & growth at Dropbox, in a post on X.

Clearly, there’s a lot of work to be done for Apple. It wouldn’t want another smartphone maker to take a bite of that fruit. 

The post Apple Intelligence Could Have Grave Consequences in India appeared first on Analytics India Magazine.

]]>
‘Differentiator of LTIMindtree is AI Adoption, Rather than Another Shiny Toy’ https://analyticsindiamag.com/deep-tech/differentiator-of-ltimindtree-is-ai-adoption-rather-than-another-shiny-toy/ Wed, 18 Dec 2024 05:34:33 +0000 https://analyticsindiamag.com/?p=10143789

LTIMindtree has decided to be transparent and said that there is no point in spending time and money on building AI products when so many have already been built.

The post ‘Differentiator of LTIMindtree is AI Adoption, Rather than Another Shiny Toy’ appeared first on Analytics India Magazine.

]]>

Ask an Indian IT company about how it uses AI, and you will learn nothing. This highlights why such companies remain service-driven rather than product-focused. Many of them also shy away from revealing the revenues they earn from their investments in generative AI.

To get some clarity about why this is the case, AIM spoke with Nachiket Deshpande, COO at LTIMindtree, and he did not hold back.

Commenting on the recent deal with Voicing AI, in which LTIMindtree committed $6 million to build human-like AI voice agents, Deshpande said, “There are so many startups that are coming all around the world. We would want to leverage those startups for those technologies that are coming up.”

Elaborating on the philosophy that LTIMindtree follows when it comes to AI, Deshpande said the company stands on three pillars: AI in everything, everything for AI, and AI for everyone.

He added that AI solutions must integrate seamlessly into users’ existing workflows. For example, if someone uses SAP, ServiceNow, or Outlook daily, the AI solution should meet them there, not as a separate platform. “This is what we refer to as the co-pilot approach, ensuring AI systems are agentic, API-driven, and omnipresent.”

GenAI is Moving Too Quickly

“Over the last two years, there’s been overwhelming noise around generative AI. My challenge was to build a strategy that is simple to adopt, enabling all 86,000 employees at LTIMindtree to align with it,” Deshpande revealed. “More importantly, I wanted a pragmatic approach: something relevant to us and not just general AI industry discourse.”

Unlike other IT companies like Infosys or TCS, which are claiming to be building in-house generative AI solutions, LTIMindtree has decided to be transparent and said that there is no point in spending so much time and money on building AI products when so many have already been built.

“Any technology we develop today risks becoming obsolete in a matter of months, and keeping up would require significant capital investment. The P&L structure of services companies like ours is fundamentally different from product companies,” Deshpande explained. “They operate with 80% gross margins, and hence they have the ability to continue to do R&D, and we operate at 30-35% gross margins.”

He believes that if a company invests and builds a particular technology like AI, it might soon become irrelevant. “I need to get 86,000 people to reimagine their work with AI, but I only need 2,000 people to build AI solutions,” he said. According to him,  even that number is high and requires a lot of investment.

“The differentiation of LTIMindtree will lie in terms of how we adopt AI, rather than saying I have another shiny toy which is better than somebody else. Because that differentiation is short-lived,” he added.

What is LTIMindtree’s AI Goal?

Deshpande explained that if a company wants to take an AI solution from proof of concept (POC) to a rollout across 10,000 customer service agents, the underlying capability needs to be built within the organisation in advance.

Following the three pillars of its philosophy, LTIMindtree wants to reimagine every service it offers with AI and move solutions from POC to large-scale deployment. This, Deshpande said, creates a new revenue opportunity for LTIMindtree as customers invest in these capabilities.

Apart from Voicing AI, the IT firm also closed a $240 million deal in the manufacturing sector, where it replaced three competitors by consolidating the client’s application maintenance and run portfolio under an AI-first approach. Since then, LTIMindtree has announced five to six similar AI-centric wins, with a large portion of the pipeline also reflecting this strategy.

These are just a few examples. One of LTIMindtree’s most successful generative AI applications is Canvas, a platform specifically designed to take AI-driven ideas into production. Deshpande said that more than 13 customers are live on the platform, which integrates seamlessly with their other AI investments. 

“I predict every single dollar of revenue will have a generative AI component embedded in it,” he revealed.

Many IT firms like Mphasis and Persistent Systems are increasingly focusing on agentic AI. Larger players like Infosys and TCS are also building multi-agent systems. Deshpande believes this shift is happening because the biggest productivity gains lie in persona-centric AI solutions.

For example, in a team, small productivity improvements for every individual don’t create a monetisable impact. “But if a developer delivers better story point velocity, a support engineer resolves more tickets, or testing cycles shrink from weeks to days, the gains become significant. Agentic systems automate large portions of a persona’s workflow, delivering tangible productivity outcomes,” he added.

When asked about why all Indian IT companies do not reveal the revenue from generative AI, Deshpande said that for LTIMindtree, almost every service has AI embedded in it. “It is not a revenue stream for IT services companies. For NVIDIA, it will be a revenue stream. For Microsoft, it will be a revenue stream.”

For LTIMindtree, however, AI will not be a standalone revenue stream; instead, it will be embedded in every revenue stream. Considering that the impact of AI is far bigger than any revenue figure a services company could report, Deshpande concluded, “Counting AI revenue is not a measure that we would like to go after because I think it misrepresents the impact of AI.”

The post ‘Differentiator of LTIMindtree is AI Adoption, Rather than Another Shiny Toy’ appeared first on Analytics India Magazine.

]]>
Why Traditional SaaS is Under Threat  https://analyticsindiamag.com/deep-tech/why-traditional-saas-is-under-threat/ Fri, 13 Dec 2024 13:18:57 +0000 https://analyticsindiamag.com/?p=10143519

All SaaS companies, including Salesforce and Oracle, are stepping up efforts to integrate AI solutions into their offerings.

The post Why Traditional SaaS is Under Threat  appeared first on Analytics India Magazine.

]]>

The future of SaaS looks bleak. Recently, Klarna CEO Sebastian Siemiatkowski announced that the company will end its service provider relationships with Salesforce and Workday as part of a major internal overhaul driven by AI initiatives.

“This news from Klarna should have every enterprise SaaS company shaking in their boots. If an internal team using AI can replicate over 20 years of work and customisation from Salesforce and Workday, to the extent the company doesn’t feel the need to pay for these tools anymore, everything we know about the stickiness and durability of enterprise software needs to be rethought in the light of AI,” Gokul Rajaram, former Coinbase board member and early stage startup investor, said.

Klarna is not alone. In an exclusive interview with AIM, Harneet SN, founder of Rabbitt AI, revealed that he is partnering with the University of California, Berkeley, to create AI-powered tutors for their online learning programs. He said that the university has opted to move away from its previous SaaS providers in favour of building in-house solutions.

Harneet cited an example of another company that previously relied on voice bots and chatbots from Yellow.ai. However, as they seek to integrate generative AI capabilities into their voice bots, they are now opting to develop their own chatbots. “Yellow.ai’s SaaS-based chatbots are not working well for them in the GenAI era,” he added.

Many feel that with the advent of GenAI coding tools like GitHub Copilot and Anthropic’s Claude, one can expect software development to become cheaper and the job market for coders to evolve, creating a more accessible environment for talent, although at lower price points.  

“You can rebuild most enterprise SaaS functionality, host for super cheap, and get basically [over] 90% functionality,” said Akber Khan, founder of Evolve Machine Learners. His company is currently building in-house SaaS solutions for large enterprises.  

Building on this idea, a user posted on X, “My general belief is that the next decade is going to be B2C, not B2B. Generally, AGI/AI will increase people’s capabilities enough [so] that B2B SaaS won’t be a good purchase. You can just build it in-house.”

SaaS companies are not alone. Indian IT giants such as TCS, Infosys, Wipro, HCLTech, and others, which are arguably still testing the waters of GenAI, could soon face a critical challenge when spending on their services becomes redundant. With AI tools, anyone within enterprises may be able to build front-end, web, and other applications with minimal coding.

AI Coding Tools Are All You Need

“Generative AI can now take on substantial workloads in software development, particularly in bug fixing, vulnerability remediation, and even optimising code quality,” said Asankhaya Sharma, founder of Patched AI. He added that Patched’s work with LLMs has shown significant progress in reducing the developer workload on security fixes by automating patch generation for code vulnerabilities​​. 

He explained that developing in-house AI involves higher initial costs due to infrastructure, talent acquisition, and maintenance expenses. However, in the long run, SaaS costs can add up as subscription fees scale with use.

Similarly, Harneet said that SaaS is comparable to an EMI rather than a one-time payment. He said, “While the upfront cost of in-house solutions may be 10-15 times higher over two to three years, the ROI (return on investment) will ultimately be positive.”

“The reality is that there’s no universal solution. The choice between in-house AI development and SaaS adoption depends heavily on an organisation’s specific situation, goals, and resources,” said Pradeep Sanyal, AI and data leader at a global tech consulting company. 

Sanyal added that, when it comes to software development, AI can already handle a significant share of the workload. He pointed out that AI excels at automating repetitive tasks, suggesting code snippets, and assisting with basic debugging. 

“In-house AI solutions can be scalable if they are well-architected. However, many AI solutions fail in production because they were initially built as proof-of-concept projects that didn’t account for the demands of production-scale environments. SaaS offerings, designed for enterprise requirements, may offer better scalability in many cases as they are thoroughly tested across various workloads,” said Pavan Nanjundaiah, head of Tredence’s studio innovation team.

Rise of Vertical SaaS

Harneet said AI has the capability to solve very niche problems. “The companies that focus on niche problems and solving them…will be really successful,” he said.

Along similar lines, A16z recently published a blog which said AI is unlocking a new era for vertical SaaS. In functions like marketing, sales, customer service, and finance, AI will augment, automate, or, in some cases, replace many of the rote tasks currently performed by people, allowing VSaaS companies to offer even more with their software. 

“The early winners in LLM-based solutions might just be general-purpose platforms. Over time, vertical AI agents will emerge. It’s like how, in the box software world, the early vendors were just trying to convince people to use software…As the market matures, it will get more sophisticated, and vertical solutions will become dominant players,” said Jared Friedman, group partner at Y Combinator, in a recent podcast with YC president Gary Tan while discussing the future of SaaS. 

Harneet believes that in the era of AI, SaaS will become a sidekick. “It will be an enabler, but SaaS is not the main focus. In the internet world, SaaS was a game because the network was king, but in the AI era, the solution is king.”

Notably, all SaaS companies, including Salesforce and Oracle, are stepping up efforts to integrate AI solutions into their offerings. At Cloudworld 2024, Oracle announced that over 50 AI agents within its Fusion Cloud Applications Suite would automate tasks to streamline business processes, deliver personalised insights, and boost productivity across functions including finance, supply chain, HR, and sales. 

Similarly, at Salesforce’s biggest tech summit, Dreamforce 2024, the company unveiled Agentforce Partner Network, which brings together tech giants such as AWS, Google Cloud, IBM, and Workday to enhance the AI-powered Agentforce platform’s capabilities.

It would be naive to declare SaaS dead. Rather, existing SaaS companies are evolving into AI-first entities. “Legacy and new SaaS companies will truly become AI-first (not just marketing), abstract away the complexity of deploying LLMs,” said Matt Turck, VC at FirstMark Capital, adding that Artificial Intelligence as a Service (AIaaS) has become the new SaaS.

SaaS companies such as Zoho, Freshworks, CleverTap, and Atlassian have added GenAI capabilities to their existing solutions. Meanwhile, numerous AI startups are developing new products based on these generative AI technologies. According to AIM, approximately 60 AI startups in India are building products using generative AI.

AIM got in touch with Priya Subramani, VP and GM of customer experience products at Freshworks, who said, “My view is that you should use your technology where it’s core to your business. For anything else, like customer service or other functions, you can adopt best practices from industry leaders.”

On the other hand, NVIDIA chief Jensen Huang believes that SaaS is sitting on a goldmine. “These platforms are sitting on a gold mine. There’s going to be a flourishing of agents specialised in Salesforce, SAP, and other platforms.” 

Every AI startup that is not engaged in core research effectively becomes a SaaS company. For instance, several companies are developing AI-powered conversational platforms for customer care, such as Exotel, Freshworks, Gupshup, CoRover.ai, LimeChat and Yellow.ai.  

However, not everyone believes the same. “Most SaaS companies rebranding themselves with AI are barely adding a limited chat functionality where users can ask a limited set of predefined questions and get directed to the dashboard for answers. There’s no AI here, just a bad UX that delivers little to no value to the end customer,” said Divyaanshu Makkar, co-founder of WizCommerce. 

Companies like Ema and Alchemyst AI are working on developing  ‘AI employees’ for businesses. In a recent blog titled ‘Why SaaS is Dead and the Future is Agentic’, Ema explains how Agentic AI overcomes the limitations of traditional SaaS. These AI agents can manage entire tasks on their own, make decisions, plan, and improve their performance based on feedback, similar to how humans learn and adapt over time.

Hybrid Approach 

According to Sanyal, while the decision to choose between in-house AI and SaaS is complex, most enterprises are gravitating toward a hybrid model. “Companies are using SaaS to initiate quickly and fill gaps while building in-house capabilities for their most critical, differentiating AI needs. This pragmatic approach balances speed, cost, control, and long-term strategic value,” he said.

Echoing similar sentiments, Harneet said that SaaS is a proven model that will not disappear completely and will remain relevant for certain use cases. However, he added that enterprises focused on building core generative AI applications will prefer to develop their own solutions.

“B2B SaaS companies, which are taking a lot of data from businesses, will have a gloomy future,” he claimed.

Building in-house AI solutions also comes with its own challenges. “Building an in-house AI stack can take several weeks to a few months. Initial development may take anywhere from two to four weeks for a basic model, while fine-tuning, testing, and full deployment could add another six to 12 weeks. If rapid deployment is essential, a SaaS solution can be a faster option,” Sharma added.

Harneet explained that not every solution needs to be built in-house. “There are certain areas that enterprises or individuals should not attempt to build in-house, especially those that are not their core expertise.”

“For example, if I am not a finance company but an AI company, I might need a tool to automate the reconciliation of my finances. In that case, I can probably outsource that tool,” he concluded. 

The post Why Traditional SaaS is Under Threat  appeared first on Analytics India Magazine.

]]>
An AI Agent Now Costs More Than a Junior Developer In India https://analyticsindiamag.com/deep-tech/an-ai-agent-now-costs-more-than-a-junior-developer-in-india/ Thu, 12 Dec 2024 08:22:15 +0000 https://analyticsindiamag.com/?p=10143409

With a $500 monthly subscription price, will it fulfil the promise? 

The post An AI Agent Now Costs More Than a Junior Developer In India appeared first on Analytics India Magazine.

]]>

The ‘I’ in India stands for IT services. Last year, over 2,50,000 youths were hired in the Indian IT sector fresh out of college. This year, over 1 lakh freshers received offer letters. While there is a decline in the number of jobs, the curve is set to grow in the coming months. 

Let’s look into the moolah. On average, a fresher is paid $4,000 per year (approximately ₹3.5 lakh annually). 

Enter Devin, an end-to-end AI-enabled software development tool built by Cognition Labs that aims to replace junior programmers or developers in organisations. What does it cost a company? It is priced at $500 a month, which amounts to $6,000 annually. 

Not only does Cognition Labs plan on emulating the role of a junior developer, but it also costs more than an actual one from India – the IT service capital of the world.

After setting the AI ecosystem on fire, Devin went missing in action for a few months. It was laden with false and misleading promises, and tools like Cursor, GitHub Copilot, and Windsurf started making their mark even before Devin was launched. Devin’s time is finally here, but is it ready to fulfil its promise? 

A Junior Dev(in) Everyone’s Desk

While the $500 monthly price tag might seem steep, Cognition Labs puts no limit on the number of seats when the tool is used within a team. Though Devin is positioned as an all-purpose tool, Cognition Labs recommends using it for front-end bugs, creating first-draft pull requests (PRs) for backlogs, and making targeted code refactors. 

Apart from the primary chat interface, Devin is also available as a Slack integration, wherein it begins to work on an issue when users tag it in a thread. Users can also start a Devin session directly from VS Code. 

Internally, Devin is said to have made an impact on Cognition Labs’ own tech stack, becoming the top contributor to many of its internal tools and front-end repositories last month. The company also showed how Devin monitored the stats from its own launch: the team asked the tool to assemble the data into a .csv file and keep tracking it for twelve hours. 

Cognition Labs also revealed that Devin was able to solve, test, and fix an issue in Anthropic’s model context protocol (MCP). We’re witnessing a case where one AI agent is fixing another. 

It isn’t all about the owner’s pride; other organisations, too, have used Devin to streamline their workflows and shorten project timelines. 

Sam Purtill, VP of engineering at Advantage Solutions, wrote in a post on X, “One of our engineers gave Devin a bug, took their son to a piano lesson, came home to a perfect fix. Total time spent: minutes.”

Moreover, Devin may also help relieve employees from time-intensive and repetitive tasks. Karim Atiyeh, founder and CTO at Ramp, said Devin was “instrumental” in helping them clean up dead code and speed up their tests, thereby giving their engineers more time to focus on more important matters. 

Nubank, a neobank based out of Brazil, said that Devin helped cut a project’s timeline from 1.5 years to just two months. 

“Devin successfully delivered 12x efficiency improvement on engineering time, helping reduce the developer toil for Nubank engineers as they’ve scaled to over 110 million customers,” read a post by Cognition Labs on X

In another instance, Devin was used at Dagger.io to tackle lower-priority issues that had gone unnoticed within teams. 

“Three months later, nobody had time to even look. Until Devin arrives and opens a PR within minutes,” said Solomon Hykes, co-founder of Dagger. He added that anyone running an open-source project simply can’t afford not to take a shot at Devin. 

Devin Is STILL a Junior Engineer

There are two ways to look at Devin. It is as good as a junior developer, or it is only as good as a junior developer. 

As impressive as it seems, its capabilities are still limited. 

While Devin does the magic, there might not be a lot of opportunities to supervise or add your input during the process. Will Brown, an AI researcher at Morgan Stanley, said on X that it may get harder to help it out when it is stuck. 

In scenarios like these, using a tool like Cursor or Windsurf, where the user is still in the driving seat, is beneficial. 

“I think I slightly prefer the ‘pair programming’ workflow of Cursor agent, which is way more hands-on, but you’re reviewing the code in real-time, plus [you] can give suggestions more easily,” he said. 

Moreover, Devin still requires a few back-and-forths, which users may not want to do if they’re paying for an agent that claims to have autonomous capabilities. 

A user on X observed the chat sessions from the PRs and pointed out the same, saying, “3/5 Devin PRs look unimpressive, [and] can be done an order of magnitude faster [by] prompting yourself or just typing the code.” 

In a detailed review of Devin, Steve Sewell, CEO of Builder.io, said that he had to wait fifteen minutes for a PR and then have quite a few back-and-forths on Slack.

“I much prefer Cursor’s workflow, where I have all of this right in my local environment and IDE (integrated development environment),” he added. Moreover, users wouldn’t want Devin to go off on its own unless it provides a good sense of trust and confidence that it will be able to accomplish the task. 

So, the same concerns persist with hiring an intern, a junior developer, or Devin. All of these can ship a feature, but if the code base is devoid of industry-standard practices and ease of interpretation, it spells trouble. 

For instance, Devin was asked to make changes to a CSS code, but it added a few components that were unrelated and unnecessary. 

Not being able to understand such changes, and being left with a pile of AI-written code, is a crucial problem. “AI always gives you an answer, and the answer is not just wrong, but also very hard to detect what is wrong about it. It just invents stuff,” said Mark Essien, CEO of Hotels.ng. 

Cory Zue, the founder of SaaS Pegasus, narrated a similar experience with a few interns who joined and shipped a feature but left a ‘mountain of code’ no one understood. 

“My own gut feel is that this is definitely happening, and many projects will die under the weight of bad AI code that no one understands,” he added. 

All things considered, will Indian IT embrace a tool like Devin inside their organisations? It seems unlikely. While Indian IT is going all in on generative AI projects and engagements, it is not keen on using these tools in the workplace for now.

Earlier this year, Mrinal Rai, assistant director and principal analyst at ISG, speaking to AIM about tools like Claude and Cursor, said, “Many of these [GenAI] solutions fail to impress clients. Indian IT service providers have long-standing relationships with enterprises and have experience in the specific nuances in a large or medium-sized business requirement.”

That said, if Devin can truly transform software development, $500 seems like a small price to pay. 

The post An AI Agent Now Costs More Than a Junior Developer In India appeared first on Analytics India Magazine.

]]>
Starlark is Basically Python, But Not Really Python, and That’s Fine https://analyticsindiamag.com/deep-tech/starlark-is-basically-python-but-not-really-python-and-thats-fine/ Thu, 12 Dec 2024 06:34:42 +0000 https://analyticsindiamag.com/?p=10143388

The Starlark project seems like writing left-handed if you’re right-handed.

The post Starlark is Basically Python, But Not Really Python, and That’s Fine appeared first on Analytics India Magazine.

]]>

Starlark, a hermetic subset of Python, has gained prominence in the realm of build systems like Bazel, which Google uses to build all of its products. Its unique position of being ‘Python-like yet not Python’ has long been a topic of debate among developers. 

On one hand, Starlark offers the familiarity of Python syntax, ensuring ease of adoption. On the other, its limitations, both by design and due to minimalism, make it a polarising choice. 

At its core, Starlark is a lightweight programming language designed to be embedded within applications, offering configuration or scripting functionalities. It excels in these tasks because it simplifies scripting for build systems.

The Goodness of Python?

According to the creators of Starlark, due to its Python-like nature, it is dynamically typed, supports high-level data structures, and includes first-class functions with lexical scoping and garbage collection. With its compact syntax and readability, Starlark is ideal for defining structured data, creating reusable functions, or integrating scripting functionality into applications.
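
For readers unfamiliar with the language, a minimal Bazel macro written in Starlark might look like the sketch below. The rule and target names are hypothetical, and this is only an illustration, but the syntax, function definitions, loops, string formatting, and keyword arguments, is what makes the language feel like Python.

```python
# Illustrative Starlark macro (a .bzl file); the target names are hypothetical.
# The syntax is a subset of Python: def, for loops, lists, strings, kwargs.

def service_libraries(name, services, deps = []):
    """Declares one cc_library per service plus an aggregate filegroup."""
    labels = []
    for svc in services:
        label = "%s_%s" % (name, svc)
        native.cc_library(
            name = label,
            srcs = native.glob(["src/%s/*.cc" % svc]),
            deps = deps,
        )
        labels.append(":" + label)

    # One aggregate target that pulls in every generated library.
    native.filegroup(
        name = name,
        srcs = labels,
    )
```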

But the very reason it is praised is also the reason some developers dislike it: Python. While Starlark reduces the learning curve for programmers who already know Python, it deliberately constrains the language to eliminate the many safety issues associated with it. 

Haoyi Li, staff software engineer at Databricks, reflecting on his seven years of experience with Starlark in Bazel, discussed the pros and cons of using Starlark on Hacker News.

“Having a ‘hermetic’ subset of Python is nice. You can be sure your Bazel Starlark codebase isn’t making network calls, reading files, or shelling out subprocesses,” he said. This hermetic nature enforces reproducibility, determinism, and enables optimisations like parallelisation and caching. Developers familiar with Python syntax find comfort in its simplicity compared to other tools, such as templated Bash or YAML scripts.

When it comes to internal tools development, Starlark shines. Ajay Kidave, who is building Clace, a platform using Go and Starlark, highlighted that while Starlark has been great for avoiding Python’s dependency management challenges thanks to easily extensible plugin APIs, it does not support the usual error-handling features.

“There are no exceptions and no multi-value return for error values. All errors result in an abort,” Kidave added. “This works for a configuration language where a fail-fast behaviour is good. But for a general purpose script, you need more fine-grained error handling.”
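
To illustrate what Kidave is describing, here is a small, hypothetical Starlark snippet: there is no try/except, so the only options are to validate inputs up front or call fail(), which aborts evaluation of the file entirely.

```python
# Hypothetical Starlark config helper: no exceptions, so errors are fatal.
def parse_port(value):
    port = int(value)  # a non-numeric string would abort the load with an error
    if port < 1 or port > 65535:
        fail("port out of range: %d" % port)  # fail() stops evaluation immediately
    return port

SERVER_PORT = parse_port("8080")
```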

Meanwhile, several others argued in a Reddit thread that being similar to Python is not really an issue, since Starlark is an embedded language. “The code just won’t work, that’s all. It’s not even statically typed, and you’re not supposed to write edifices of engineering in it.”

When Starlark was launched, a lot of companies started adopting it to build dozens of products, which shows that there was a gap within the Python ecosystem that it could fill. 

Limited, Yet Complex

While Starlark’s minimalism offers safety, it can also be a curse. Haoyi Li also pointed out the pitfalls of the language. “A large Starlark codebase is a large Python codebase. Large Python codebases are imperative, untyped, and messy even without Starlark’s constraints.”

The absence of modern Python features like PEP 484 type annotations makes managing complexity difficult. Without type support, IDE assistance is minimal, leading to what he calls “spaghetti code” in large Starlark projects. Starlark’s design philosophy reflects a broader tension in build systems. 

Should scripting languages prioritise simplicity and safety or embrace complexity to accommodate genuine needs? This has been a discussion for a long time.

“I think the lesson of Starlark is again confirmation that purely declarative languages are too restrictive. You need some ‘smart’ code by users. Even if it’s only in 5% of the code, that 5% is very useful and load-bearing,” said a developer, explaining that Starlark is essentially a nice mix of both paradigms.

Rochus Keller, who developed BUSY, which is a lean and statically typed cross-platform build system for GCC, CLANG, and MSVC, criticised the lack of modularisation and static typing in many build systems. “The achievements of software engineering, like modularisation and type checking, seem to have had little influence on build systems,” Keller said.

Meanwhile, Gradle, a JVM build system, opts for complexity by using Kotlin—a strongly typed, general-purpose language. As Mike Hearn, the creator of Hydraulic Conveyor, pointed out, Gradle leans into complexity and treats building systems as special programs. This makes the resulting mess somewhat optimisable and tractable.

Python’s flexibility comes with serious trade-offs, making programs difficult to reason about. The same goes for Starlark, which remains a constant subject of debate among programmers across languages.

The post Starlark is Basically Python, But Not Really Python, and That’s Fine appeared first on Analytics India Magazine.

]]>
The Cost of Artificial Super Intelligence is Now $200 https://analyticsindiamag.com/deep-tech/the-cost-of-artificial-super-intelligence-is-now-200/ Wed, 11 Dec 2024 11:16:53 +0000 https://analyticsindiamag.com/?p=10143303

The cost of artificial intelligence is now $20, and if OpenAI achieves AGI in 2025, it could possibly be priced at $42. 

The post The Cost of Artificial Super Intelligence is Now $200 appeared first on Analytics India Magazine.

]]>

OpenAI’s ChatGPT Pro is sparking debates on its true value. As AI becomes more integrated into our lives, the idea of ‘intelligence too cheap to meter’ feels closer than ever. But for now, as Logan Kilpatrick, a former OpenAI exec now at Google, said: “consumer willingness to pay for AI is going to go up (a lot).” 

The next few years will reveal whether this price point is a disruption or just the start of a bigger revolution. For instance, the o1 Pro model, bundled with the $200 ChatGPT Pro subscription, has raised questions about whether the cost is truly justified. 

However, the bitter reality is that o1 is not for everyone. 

“A small percentage of users want to use ChatGPT a TON and hit rate limits and want to pay more for more intelligence on really hard problems. The $200/month tier is good for them!” said OpenAI chief Sam Altman, clarifying that most users will be best served by the free tier or the $20/month tier. 

OpenAI believes that its o1 Pro mode delivers more reliable and comprehensive responses, particularly in fields like data science, programming, and case law analysis. It also outperforms both o1 and o1-preview on challenging ML benchmarks across math, science, and coding.

Unlocks Scientific Breakthroughs  

The model is expected to play a key role in sectors like health sciences, assisting in the search for cures for rare diseases. For instance, Derya Unutmaz, a professor at The Jackson Laboratory, revealed in a post on X that he is using o1 pro for a cancer therapy project.

Unutmaz said that o1 Pro helped him propose a groundbreaking idea to simulate T-cell exhaustion in a way that takes into account the time-based progression of the process. He said the model, instead of just looking at a single moment, proposed introducing different stress factors to the tumour at different points in time, giving an analogy of the Battle Royale game, which challenges players to adapt to escalating obstacles. 

“To be clear, o1 Pro is a totally different beast, requiring different prompting approaches and being useful in different ways from GPT-4 or Sonnet,” Matt Shumer, CEO of HyperWrite AI, said on X.

He explained that he first tests whether Sonnet can handle complex coding tasks. If Sonnet struggles or fails, he switches to o1 Pro. He described o1 Pro as a more powerful tool, capable of providing solutions when Sonnet fails, especially in more complex or difficult scenarios. “There were a few really cool instances tonight where Sonnet was struggling heavily, and o1 Pro one-shotted a solution,” he said.

Deedy Das, VC at Menlo Ventures, used o1 Pro to solve NYT Connections — a simple game where you group 16 words into four groups of four. “o1 Pro solves it consistently in one shot,” he wrote on X. 

OpenAI researcher Noam Brown, in a recent podcast, said that he uses o1 Pro for complicated coding tasks. “If I have something that’s pretty easy, I’ll give it to GPT-4o, but if I have something that I know is really hard or that I need to write a lot of code for, I’ll just give it to o1 and have it do the whole thing on its own,” said Brown. 

He said an interesting thing about o1 is that it’s a real proof of concept — users can give it a difficult problem, and it can figure out the intermediate steps on its own and how to tackle those steps. However, he added that he doesn’t really know how people will use o1 until it is deployed in the wild.

Unlike previous models, where the chain of thought was primarily a prompting technique, o1 has been trained with reinforcement learning to apply chain-of-thought reasoning without additional prompting. Moreover, OpenAI recently introduced reinforcement fine-tuning (RFT), allowing organisations to create expert-level AI for tasks in law, healthcare, finance, and more. This method enables training with minimal data, sometimes as few as 12 examples.

Unleashes Creativity to a Whole New Level 

OpenAI launched its highly anticipated video-generation model, Sora, which, according to the company, is a calculated step toward achieving AGI one frame at a time. 

With an OpenAI Plus account, users get 50 Sora generations per month, and with a Pro account, users get 500 fast generations (or fewer at high resolution) and unlimited generations in a slower mode. This makes the $200 price tag worthwhile. 

Altman describes the new video-generation model as the ‘GPT-1 for video’ and is eager to see how users will collaborate with each other. “One of the most exciting things to me about this product is how easy it is to co-create with others; it feels like an interesting new thing! This is early – think of it like GPT-1 for video –but I already think the feed is so compelling,” he said.

Sora uses credits for video generation, with costs varying based on video resolution and duration. For example, a 5-second 480p square video costs 20 credits, while the same length in 1080p costs 200 credits. Longer videos incur higher credit charges. Features like Re-cut, Remix, Blend, and Loop also consume credits based on video length. ChatGPT Pro users can generate relaxed videos without using credits.

ChatGPT Plus offers 1,000 credits for up to 50 priority videos (720p, 5s max), while ChatGPT Pro offers 10,000 credits for up to 500 priority videos (1080p, 20s max) and unlimited relaxed videos. 

Back in 2023, when OpenAI launched ChatGPT Professional, it was priced at $42. Notably, a possible explanation for the high price is OpenAI’s anticipated $5 billion loss this year.

Is it Worth Buying? 

Many quixotic intellectuals believe the price of ChatGPT Pro is justifiable – $200 is the price of unleashing creative freedom and unlocking scientific discoveries,  among other things.

Ethan Mollick, associate professor at The Wharton School, who got early access to o1, shared his experience and compared it to Claude Sonnet 3.5 and Gemini. “It can solve some PhD-level problems and has clear applications in science, finance, and other high-value fields. Discovering uses will require real R&D efforts,” he said.

He further explained that while o1 outperforms Sonnet in solving specific hard problems that Sonnet struggles with, it doesn’t surpass Sonnet in every area. Sonnet remains stronger in other domains. “o1 is not better as a writer, but is often capable of developing complex plots better than Sonnet because it can plan ahead better,” he said.

A Reddit user shared their experience after spending eight hours testing OpenAI’s o1 Pro ($200) against Claude Sonnet 3.5 ($20) in real-world applications. For complex reasoning, the o1 Pro was the winner, providing slightly better results but taking 20-30 seconds longer per response. 

Claude Sonnet 3.5, while faster, achieved 90% accuracy on these tasks. In code generation, Claude Sonnet 3.5 outperformed o1 Pro, producing cleaner, more maintainable code with better documentation, whereas o1 Pro tended to overengineer solutions.

For advanced mathematics, o1 Pro excelled at PhD-level problems, but Claude Sonnet 3.5 handled 95% of practical math tasks perfectly. In vision analysis, o1 Pro stood out with detailed image interpretation, a capability that Claude Sonnet 3.5 currently lacks. When it came to scientific reasoning, the result was a tie – o1 pro offered deeper analysis, while Claude Sonnet 3.5 provided clearer explanations.

However, Abacus AI chief Bindu Reddy, in her internal tests, pointed out that o1 lags behind Sonnet and Gemini in coding. “We ran the entire live bench AI coding questions by hand and evaluated o1 for coding. The result is that o1 is good but not as good at coding as Gemini or Anthropic. However, it is an improvement over the o1-preview in this category,” she said.

With eight days of the ‘12 Days of OpenAI’ shipmas still to go, ChatGPT Pro users can expect more delights, possibly including an advanced voice mode with vision and more.  

The post The Cost of Artificial Super Intelligence is Now $200 appeared first on Analytics India Magazine.

]]>
Fine-Tuning is Dead, Long Live Reinforcement Fine-Tuning https://analyticsindiamag.com/deep-tech/fine-tuning-is-dead-long-live-reinforcement-fine-tuning/ Sat, 07 Dec 2024 13:13:47 +0000 https://analyticsindiamag.com/?p=10142661

‘This is not standard fine-tuning... it leverages reinforcement learning algorithms that took us from advanced high school level to expert PhD level’

The post Fine-Tuning is Dead, Long Live Reinforcement Fine-Tuning appeared first on Analytics India Magazine.

]]>

OpenAI has shattered the boundaries of AI customisation with the debut of reinforcement fine-tuning (RFT) for its o1 models on the second day of its ‘12 Days of OpenAI’ livestream series. This new breakthrough marks the end of traditional fine-tuning as we know it. With RFT, models don’t just replicate—they reason. 

By employing reinforcement learning, OpenAI looks to empower organisations to build expert-level AI for complex tasks in law, healthcare, finance, and beyond. This new approach enables organisations to train models using reinforcement learning to handle domain-specific tasks with minimal data, sometimes as few as 12 examples. 

By using reference answers to evaluate and refine model outputs, RFT improves reasoning and accuracy in expert-level tasks. OpenAI demonstrated this technique by fine-tuning the o1-mini model, allowing it to predict genetic diseases more accurately than its previous version.

Redefining Model Fine-Tuning

Unlike traditional fine-tuning, RFT focuses on teaching models to think and reason through problems, as Mark Chen, OpenAI’s head of research, explained: “This is not standard fine-tuning… it leverages reinforcement learning algorithms that took us from advanced high school level to expert PhD level.” 

The approach is not without limitations, however. OpenAI engineer John Allard explained that RFT excels in tasks where outcomes are “objectively correct and widely agreed upon”, but may struggle in subjective domains or creative applications where consensus is harder to define.

That said, RFT is generally considered more computationally efficient than traditional full fine-tuning. Critics also note that its performance depends heavily on task design and the quality of the training data.

Interestingly, with RFT, you can achieve significant performance improvements with just a few dozen examples because the model learns from feedback rather than needing to see all possible scenarios.

Early adopters, including Berkeley Lab researchers, have already achieved remarkable results. For example, a fine-tuned o1-mini model outperformed its base version in identifying genetic mutations that cause rare diseases.

OpenAI has opened its RFT alpha program to select organisations. Participating teams will gain access to OpenAI’s infrastructure to train models optimised for their unique needs. “Developers can now leverage the same tools we use internally to build domain-specific expert models,” said Allard. 

Justin Reese, a computational biologist, highlighted RFT’s transformative potential in healthcare, particularly for rare diseases affecting millions. “The ability to combine domain expertise with systematic reasoning over biomedical data is game-changing,” he said.

Similarly, OpenAI’s partnership with Thomson Reuters has already demonstrated success in fine-tuning legal models, paving the way for enhanced AI applications in high-stakes fields like law and insurance.

A New Era of AI Customisation 

With a public release planned for 2025, OpenAI aims to refine RFT based on feedback from early participants. Beyond its initial applications, OpenAI envisions RFT models advancing fields like mathematics, research, and agent-based decision-making. “This is about creating hyper-specialised tools for humanity’s most complex challenges,” said Chen.

Simply put, this technique transforms OpenAI’s o1 series of models into domain-specific experts, enabling them to reason with unparalleled accuracy and outperform their base versions on complex, high-stakes tasks.

Regular fine-tuning typically involves training a pre-trained model on a new dataset with supervised learning, where the model adjusts its parameters based on the exact outputs or labels provided in the dataset. 

On the other hand, RFT uses reinforcement learning where the model learns from feedback on its performance, not just from direct examples. 

Instead of learning from fixed labels, the model gets scored based on how well it performs on tasks according to a predefined rubric or grader. This allows the model to explore different solutions and learn from the outcomes, focusing on improving reasoning capabilities.
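
A rough way to picture the difference is sketched below. This is a conceptual illustration only, not OpenAI’s actual API: the grader, example data, and model interface are all hypothetical, but the key idea holds, the model is rewarded with a score rather than copied labels.

```python
# Conceptual sketch of RFT's feedback loop (not OpenAI's API).
# The grader returns a score; the score, not a fixed label, drives the update.

def grade(model_answer: str, reference: str) -> float:
    """Toy grader: full credit for an exact match, partial if the reference appears."""
    answer, ref = model_answer.strip().lower(), reference.strip().lower()
    if answer == ref:
        return 1.0
    if ref in answer:
        return 0.5
    return 0.0

# A few dozen domain-specific examples can suffice, since the model is scored
# on outcomes rather than shown every possible labelled output.
training_examples = [
    {"prompt": "Which gene is the most likely cause of symptom set A?", "reference": "FBN1"},
    {"prompt": "Which gene is the most likely cause of symptom set B?", "reference": "MECP2"},
]

def rft_step(model, example):
    answer = model.generate(example["prompt"])        # hypothetical model interface
    reward = grade(answer, example["reference"])      # score against the rubric
    model.update(example["prompt"], answer, reward)   # reinforcement-style policy update
```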

ChatGPT o1 Pro Feels like Buying a Lambo

On the first day of 12 Days of OpenAI, the company released the full version of o1 and a new $200 ChatGPT Pro model.  The ChatGPT Pro plan includes all the features of the Plus plan and access to the additional o1 Pro mode, which is said to use ‘more compute for the best answers to the hardest questions’. Furthermore, the plan is set to offer unlimited access to o1, o1-mini and GPT-4o along with the advanced voice mode.

OpenAI also announced new developer-centric features to the model. These include structured outputs, function calling, developer messages, and API image understanding. OpenAI also said they’re working on bringing API support to the o1 model. 

“for extra clarity: o1 is available in our plus tier, for $20/month. with the new pro tier ($200/month), it can think even harder for the hardest problems. most users will be very happy with o1 in the plus tier!,” posted OpenAI chief Sam Altman on X. 

Many in the community feel that $200 is too much for a ChatGPT Pro subscription. “Don’t think I need o1 Pro for $200/month. o1 is enough for me. Heck, just GPT-4o is enough for me,” posted a user on X.

“ChatGPT o1 Pro feels like buying a Lambo. Are you in?” posted another user. 

Ethan Mollick, associate professor at The Wharton School, who has early access to o1, shared his experience and compared it to Claude Sonnet 3.5 and Gemini. “It can solve some PhD-level problems and has clear applications in science, finance, and other high-value fields. Discovering uses will require real R&D efforts,” he said.

He explained that while o1 outperforms Sonnet in solving specific hard problems that Sonnet struggles with, it doesn’t surpass Sonnet in every area. Sonnet remains stronger in other domains. “o1 is not better as a writer, but it is often capable of developing complex plots better than Sonnet because it can plan ahead better,” he said.

A Reddit user shared their experience after spending 8 hours testing OpenAI’s o1 Pro ($200) against Claude Sonnet 3.5 ($20) in real-world applications.

For complex reasoning, o1 Pro was the winner, providing slightly better results but taking 20-30 seconds longer per response. Claude Sonnet 3.5, while faster, achieved 90% accuracy on these tasks. In code generation, Claude Sonnet 3.5 outperformed o1 Pro, producing cleaner, more maintainable code with better documentation, whereas o1 Pro tended to overengineer solutions.

Similarly, Abacus AI chief Bindu Reddy said that Sonnet 3.5 still performs better than o1 in coding, based on manual tests she conducted since OpenAI has not yet released the API. 

“Early indications are that Sonnet 3.5 still rules when it comes to coding. We will be able to confirm this result whenever OpenAI chooses to make an API available,” she said.

The post Fine-Tuning is Dead, Long Live Reinforcement Fine-Tuning appeared first on Analytics India Magazine.

]]>
Llama 3.3 Just Made Synthetic Data Generation Effortless https://analyticsindiamag.com/deep-tech/llama-3-3-just-made-synthetic-data-generation-effortless/ Sat, 07 Dec 2024 05:07:01 +0000 https://analyticsindiamag.com/?p=10142650

‘My synthetic data cost goes down 30x’

The post Llama 3.3 Just Made Synthetic Data Generation Effortless appeared first on Analytics India Magazine.

]]>

Meta today unveiled Llama 3.3, a multilingual LLM that aims to redefine AI’s role in synthetic data generation. Featuring 70 billion parameters, Llama 3.3 is claimed to match the performance of the previous 405B model while being optimised for efficiency and accessibility.

Its multilingual output supports diverse languages, including Hindi, Portuguese, and Thai, empowering developers worldwide to create customised datasets for specialised AI models. 

“As we continue to explore new post-training techniques, today we’re releasing Llama 3.3 — a new open source model that delivers leading performance and quality across text-based use cases such as synthetic data generation at a fraction of the inference cost,” shared Meta, on X. 

Fuels Synthetic Data Generation 

Developers can now use its expanded context length of 128k tokens to produce vast and high-quality datasets, addressing challenges like privacy restrictions and resource constraints. 

Meta’s chief AI scientist Yann LeCun previously said that this capability enables innovation in low-resource languages, a sentiment echoed by Indian entrepreneur Nandan Nilekani. “India should focus on building small, use-case-specific models quickly,” Nilekani said, highlighting Llama’s pivotal role in generating tailored training data for Indic language models.

The success of such approaches is evident in projects like Sarvam AI’s Sarvam 2B, which outperforms larger models in Indic tasks by utilising synthetic data generated with Llama. 

Hamid Shojanazeri, an ML engineer at Meta, said synthetic data generation solves critical bottlenecks in domains where collecting real-world datasets is too costly or infeasible. “Synthetic data is vital for advancing AI in privacy-sensitive areas or low-resource languages,” he added. With its RLHF tuning and supervised fine-tuning, Llama 3.3 produces instruction-aligned datasets for tasks requiring high precision.
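
As a rough illustration of what that workflow can look like, the sketch below prompts Llama 3.3 through Hugging Face transformers to emit instruction-style pairs. The model ID, prompt and generation settings are assumptions for the example, and the 70B weights realistically need multi-GPU hardware or a quantised build.

```python
from transformers import pipeline

# Gated repository: requires accepting Meta's licence on Hugging Face first.
generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.3-70B-Instruct",
    device_map="auto",
)

prompt = (
    "Generate three question-answer pairs in Hindi about crop insurance. "
    "Return them as JSON objects with 'question' and 'answer' keys."
)

# Sampled generation produces varied synthetic examples on repeated calls.
result = generator(prompt, max_new_tokens=512, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```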

Indic startups like Sarvam AI and Ola Krutrim have already reaped the benefits of Llama’s capabilities. Sarvam AI’s 2B model, trained on 2 trillion synthetic Indic tokens, demonstrates how such data can efficiently train smaller, purpose-built models while retaining high performance. 

“If you look at the 100 billion tokens in Indian languages, we used a clever method to create synthetic data for building these models using Llama 3.1 405B. We trained the model on 1,024 NVIDIA H100s in India, and it took only 15 days,” said Sarvam AI chief Vivek Raghavan in an interview with AIM.

Similarly, Llama 3.3’s multilingual support and scalability make it indispensable for bridging the data divide in underrepresented languages.

Llama 3.3’s ability to support synthetic data generation extends beyond niche use cases, fostering broader adoption among developers, educators, and businesses. “By reducing the cost of producing high-quality training data, Llama accelerates innovation globally,” said Ahmad Al-Dahle, Meta’s VP of Generative AI.

As speculation about GPT-4.5 intensifies, Llama 3.3 has decisively stepped in to meet immediate developer needs. With its revolutionary approach to synthetic data generation and cost-effectiveness, it’s clear that Llama 3.3 isn’t just filling a gap—it’s setting a new standard.

“My synthetic data cost goes down 30x,” said Pratik Desai, co-founder at KissanAI, on X. 

Laying the Groundwork for Llama 4

The release of Llama 3.3 fits squarely into Meta’s long-term AI strategy. As Meta CEO Mark Zuckerberg revealed during the company’s Q3 earnings call, the forthcoming Llama 4, set for early 2025, will introduce “new modalities, stronger reasoning, and much faster capabilities.” This suggests that the synthetic data generation capabilities refined in Llama 3.3 could become even more robust in future iterations.

Meta’s VP Ragavan Srinivasan recently hinted at advancements in “memory-based applications for coding and cross-modality support” for future Llama models. The robust framework established by Llama 3.3’s synthetic data capabilities could be integral to these developments. By enabling developers to produce domain-specific training datasets, Meta positions itself as a critical enabler of innovation in both the private and public sectors.

Future Llama versions will likely support an even broader array of languages and specialised use cases. As synthetic data generation becomes central to AI development, tools like Llama Guard 3 and enhanced tokenisation methods will ensure safe, responsible usage.

For countries like India, where data creation in regional languages is critical, it offers an accessible pathway to developing culturally relevant AI solutions. 

Globally, as Mark Zuckerberg mentioned, Meta’s next-generation data center in Louisiana promises to drive even more ambitious AI advancements: “We are in this for the long term, committed to building the most advanced AI in the world.”

The post Llama 3.3 Just Made Synthetic Data Generation Effortless appeared first on Analytics India Magazine.

]]>
Haskell is a Goated Programming Language That No One Uses https://analyticsindiamag.com/deep-tech/haskell-is-a-goated-programming-language-that-no-one-uses/ Tue, 03 Dec 2024 13:04:41 +0000 https://analyticsindiamag.com/?p=10142355

Haskell thrives in academia, where its theoretical underpinnings are studied and expanded. But beyond that, you would rarely find someone talking about it being used in any industry.

The post Haskell is a Goated Programming Language That No One Uses appeared first on Analytics India Magazine.

]]>

Despite its limited adoption, Haskell is often called a ‘goated’ programming language, attracting admiration for its elegance and power. However, its niche appeal and steep learning curve keep it on the fringes of mainstream programming.

Haskell is a purely functional programming language renowned for its strong static typing, immutability, and lazy evaluation. 

But why don’t people use Haskell anymore? “Haskell is perceived to be an impractical, academic language. Therefore, people refuse to use it and instead try to translate lessons they learned from it to ‘more practical’ languages such as C++,” one developer summed it up on Hacker News.

Haskell’s academic roots run deep: it was developed by a community of researchers who prioritised theoretical brilliance over practicality. Its functional programming paradigm emphasises immutability, strong typing, and purity, making it a dream for advanced developers who thrive on mathematical elegance. 

But that was in the late 1980s, when a group of researchers set out to create a single standard language for functional programming, resulting in the birth of Haskell. In today’s world, however, learning Haskell brings only modest monetary benefits. 

According to the Stack Overflow Developer Survey 2024, Haskell developers receive a median salary of $68,337, which is right in the middle of the chart. However, according to the same survey, Haskell is only used by 2% of the total respondents.

The language still appeals to many people, but not for its monetary benefits. Apart from xmonad, darcs, pandoc, and a few others, little widely used software has been written in Haskell, and few of those projects have gained mainstream popularity.

Designed for a Few, Not Many

Haskell is frequently employed in finance to build high-assurance systems, where correctness is paramount. Its strong type system and functional purity reduce bugs, making it easier to reason about code.

Moreover, the language’s design makes it a natural fit for building compilers and tools that analyse other codebases. Unsurprisingly, Haskell thrives in academia, where its theoretical underpinnings are studied and expanded. Beyond that, though, you would rarely hear of it being used in any industry.

The internet seems divided on Haskell’s role in the programming landscape. A user on X quipped, “What I read: person angry because they do not understand Haskell,” hinting at the elitist aura surrounding the language. Others, meanwhile, humorously celebrated figures like Lennart Augustsson, one of Haskell’s prominent contributors, as “one of the craziest Chad engineers ever born.”

Haskell vs Go is the latest debate in the niche corner of developer town that likes to argue about which language is better. Languages like Go were born out of pragmatic necessity: developers jokingly describe Go as “three Chad engineers (one of whom is a cofounder of the industry with Dennis Ritchie) getting tired of waiting for C++ to compile.” 

Go’s straightforward syntax and focus on productivity cater to large teams with diverse skill levels, emphasising simplicity over sophistication. “Haskell is designed for a single advanced programmer. Go is designed for a big team of average ones,” remarked a developer, pointing at Haskell’s steep learning curve and limited accessibility. In contrast, Go, with its practical, team-friendly design, excels at delivering software efficiently. 

The divide between these philosophies underscores why Haskell remains a niche language, lagging far behind even Python and JS in adoption. Though its power is undeniable, its usability often intimidates newcomers, while Go’s design philosophy prioritises getting developers, even those fresh out of college, productive quickly.

‘Why Can’t I Just Use Python?’

Developers say it is harder to convince their teams to adopt Haskell than to simply go with what everybody else is using.

While Go, Python, and JS are celebrated for their pragmatism and adoption in large-scale projects, Haskell stands as a testament to what programming can achieve when designed with purity and elegance in mind. But with few mainstream frameworks built around the language, Haskell risks fading further into obscurity.

Moreover, comparing programming languages has never benefited anyone anyway. “Haskell vs Go might be the worst, most nonsensical midwit debate I’ve ever seen,” said a user on X. 

Haskell’s influence far exceeds its usage. It continues to inspire the future of programming languages and tools. While it may never dominate the industry, it remains a cherished gem for those who dare to dive into its depths, proving that sometimes, brilliance doesn’t need mass appeal to leave a lasting legacy.

The post Haskell is a Goated Programming Language That No One Uses appeared first on Analytics India Magazine.

]]>
Is Java Slowly Running Out of Steam? https://analyticsindiamag.com/deep-tech/is-java-slowly-running-out-of-steam/ Tue, 03 Dec 2024 10:19:06 +0000 https://analyticsindiamag.com/?p=10142321

Python is now the top programming language on GitHub. 

The post Is Java Slowly Running Out of Steam? appeared first on Analytics India Magazine.

]]>

Last month, the GitHub Octoverse 2024 report revealed that Python is now the most popular language on GitHub, having overtaken JavaScript. This was driven mainly by its dominance in data science, machine learning, and scientific computing, alongside the rise of generative AI projects. 

This brings us to question whether Java is slowly losing its charm. Python developers are smiling, as they have long made fun of Java for its notorious boilerplate code. C programmers laugh at its often ridiculous class names, and C++ users hate that you can’t blow your whole leg off with manual memory management.  

However, there is still hope for Java. According to the TIOBE Index, it is still among the top five most popular programming languages, along with Python and the C family of languages. 

There are thousands of job postings for experienced Java programmers in industries as diverse as fintech, healthcare, and travel. All of them are still choosing object-oriented Java as their primary programming language in their custom software development tech stack. 

Source: TIOBE

Java’s Popularity in 2024

Even after more than two decades, Java is still heavily used by many programmers, and plenty of products are being built using it. According to the April 2023 TIOBE Programming Community Index, which is an indicator of widely used programming languages, Java ranked third, trailing behind Python and C. 

Source: TIOBE

Similarly, the RedMonk Programming Language Rankings for Q1 ranked Java as the third most popular language, after JavaScript and Python, based on data from GitHub and Stack Overflow.

Source: RedMonk

Also, this year, an international job search site in the IT industry, Devjobsscanner, studied over 7 million vacancies and found that Java was in the top three most in-demand languages after JavaScript and Python. 

That’s not all: Java tops the list of most searched technologies in 80 out of 162 countries spanning Australia, Africa, South America, and Europe. 

Java in Building Generative AI 

Java is proving its strength in generative AI by evolving from simple scripts to powerful Spring Boot applications. Offering unmatched versatility and scalability for real-world use cases, Java stands out as a reliable choice alongside Python in the AI revolution.

While Python is known for its simplicity and readability, Java has a more verbose syntax and a steeper learning curve. Its static typing system, which provides robustness and error-checking at compile time, can be less intuitive for beginners. 

However, Java’s structured approach can lead to more maintainable and scalable code in large projects. 

Java’s compiled nature gives it a performance niche, making it suitable for applications where speed is crucial. Its Just-In-Time (JIT) compiler and efficient memory management through garbage collection contribute to its high performance. 

As we’ve already seen, it continues to remain a strong contender in the job market, especially in enterprise development. Its long-standing presence in the industry ensures a stable demand for Java developers. 

While Java has a rich ecosystem, particularly in enterprise applications, its data science libraries, like Weka and Deeplearning4j, are available but don’t match the breadth and depth of Python’s offerings. 

With an equally sturdy community in the enterprise sector, Java’s “Write Once, Run Anywhere” philosophy ensures high portability. It also has a strong support network, making it a reliable choice for long-term projects. 

A user working in the finance sector mentioned that the microservice backends for numerous modernisation projects are written in Java, adding that it’s not going anywhere. Another user noted that Java is so widely adopted that it is almost too difficult to change or introduce anything new to it.

Another said that Java isn’t even that old tech anymore. With a well-designed Lambda syntax, type inference, pattern matching, sum types, and decent APIs for functional programming, it has become a pretty “OK language”.

Earlier this year, Oracle officially launched Java 22, the latest iteration of the world’s most widely used programming language and development platform.

Java 22 incorporates 12 JDK Enhancement Proposals (JEPs) that bring about language improvements, core libraries and tools capabilities, as well as performance updates. 

“By delivering enhancements that streamline application development and extend Java’s reach to make it accessible to developers of all proficiency levels, Java 22 will help drive the creation of a wide range of new applications and services for organisations and developers alike,” said Georges Saab, senior vice president, Oracle Java Platform. 

He also announced the return of JavaOne in 2025. 

The post Is Java Slowly Running Out of Steam? appeared first on Analytics India Magazine.

]]>
No One Has Dared Topple JSON in 23 Years https://analyticsindiamag.com/deep-tech/no-one-has-dared-topple-json-in-23-years/ Tue, 03 Dec 2024 05:22:54 +0000 https://analyticsindiamag.com/?p=10142224

“As Doug would say himself, JSON was discovered, not created.”

The post No One Has Dared Topple JSON in 23 Years appeared first on Analytics India Magazine.

]]>

Over two decades ago, Douglas Crockford created JSON (JavaScript Object Notation), and the world hasn’t been the same. People might face problems with JSON, but nothing has come close to matching its simplistic yet tremendous capabilities.

“It’s crazy how even after 23 years, devs haven’t created anything better than JSON for the human-readable serialisation/config format,” Dmitrii Kovanikov, senior SWE at Bloomberg, posted on X, speaking about the legacy of this text-based format for storing and exchanging data for human readability. 

The two closest alternatives to JSON are XML and YAML but, according to Corey Butler, a seasoned developer currently building Runtime, neither comes close in popularity. “XML syntax is verbose and interest in it has declined for years. JSON was introduced as a lighter-weight replacement for XML, but it hasn’t completely killed off XML yet,” he said.

Butler said that YAML is somewhat popular, but it’s not really an ‘apples to apples’ comparison. “It’s close though. It’s a popular format for small-form configurations… For bulk data transfer, CSV and tab-delimited formats are popular, but they do not provide as much metadata as either JSON or XML.”

A user remarked wittily, “As Doug [Douglas Crockford] would say himself, JSON was discovered, not created.”

‘Why Fix What Isn’t Broken?’

Interestingly, Kovanikov is also working on a simple config language based on Category Theory principles. “Today, in my dream, the solution came to me. Everything clicked. I’m going to share the project soon,” he said. 

The biggest reason for JSON’s success, even though it came years after XML, is its simplicity.  JSON is simple, effective, and universally supported. “Every time someone tries to reinvent the wheel, they just make it bloated or overly complicated. YAML? A nightmare of whitespace errors. XML? Bloated bureaucratic garbage,” said a user on X.

Several articles on the web claim that JSON is slow compared to its alternatives, but the truth is that none of the contenders, be it Protocol Buffers, MessagePack, BSON, or Avro, works as a true stand-alone replacement for JSON. 

JSON’s human-readable format, language-agnostic nature, and support for APIs make it irreplaceable in web development. 

People complain about the lack of comments in JSON, which is arguably the only thing it lacks. But the truth is that it was designed purely for data, not comments. “All of a sudden, we are sending 50% comments in our payloads, so it’s no longer ‘better’. The design is stupid-proof. I like it,” said a user on X.
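
That simplicity is easy to demonstrate. The snippet below, using nothing but Python's standard library, round-trips a payload and shows that a standards-compliant parser rejects comments outright.

```python
import json

payload = {"id": 42, "tags": ["config", "api"], "active": True}

# Serialise to a human-readable string and parse it back: lossless for the
# basic types that APIs typically exchange.
text = json.dumps(payload, indent=2)
assert json.loads(text) == payload

# Comments are not part of the JSON grammar, so the parser raises an error.
try:
    json.loads('{"id": 42} // not allowed')
except json.JSONDecodeError as err:
    print("Comments rejected:", err)
```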

Maybe there is a startup somewhere whose “sole purpose is to provide a proprietary serialisation/config format file backed with AI technology”. As AI penetrates everything, the inefficiency of handling very large datasets might become a problem for JSON: parsing big files can be slow because the entire structure has to be loaded into memory. 

Is Anyone Even Trying?

In an attempt to overthrow the JSON legacy, Gene Thomas from Planet Earth Software released Xenon 1.0, touting it as an alternative to XML, JSON, and YAML and “the best way to represent information.”

But following the announcement and discussion on Hacker News, things didn’t turn out well for Xenon. The article, which eventually got flagged, received a lot of criticism. Kang Seonghoon, a developer who tried building a JSON alternative earlier, said that there are several things that are just wrong with the format. 

Xenon’s requirement for a mandatory BOM is problematic, especially for non-Windows users, as it is invisible by definition. The format’s arbitrary naming conventions raise questions, such as whether tabs are acceptable in names. There is also a lack of clarity in distinguishing between the wire format and the serialisation protocol, leading to confusion about data types. These are just some of the problems highlighted in the comments.

Another developer, who goes by the name ‘OftenSometimes’ on Dev.to, tried to build a new approach called the Cbot (Character Based Object Transport) protocol, which has yet to be tested and released. According to the creator, JSON has several limitations, chief among them what he calls “a bad habit of not guaranteeing anything”, where a user can input any name without verifying whether it is connected to a string.

Cbot is designed to be a purely character-based, machine-readable format. “It has a predictable and straightforward syntax, and it could be seen as a kind of small assembly language,” said the creator. But no tangible version has been released yet. 

While people keep trying to build a JSON alternative, one thing is for sure—the 23-year legacy is hard to break.

The post No One Has Dared Topple JSON in 23 Years appeared first on Analytics India Magazine.

]]>
Every App is Now an AI App https://analyticsindiamag.com/deep-tech/every-app-is-now-an-ai-app/ Thu, 28 Nov 2024 07:23:33 +0000 https://analyticsindiamag.com/?p=10141867

Microsoft, Amazon, and other tech giants are shaping a future where AI powers every app, offering tools to build the next generation—while ensuring you play by their rules.

The post Every App is Now an AI App appeared first on Analytics India Magazine.

]]>

At Microsoft Ignite 2024, held last week, the company not only focused on Copilots and AI agents but also made the effort to provide developers with an environment to build AI applications. 

On the first day of the conference, the tech giant introduced the Azure AI Foundry, formerly known as Azure AI Studio, a first-class app server for the AI age where developers can design, customise, and manage AI apps and agents. 

“Every application is an AI application, and every new generation of apps has brought a changing set of needs,” Microsoft CEO Satya Nadella said during his keynote. 

The idea of Azure AI Foundry is that everyone within an organisation—not just developers and AI engineers, but also professionals and regular workers—will be able to access Microsoft’s latest AI developments. Besides, business leaders and IT pros will have access to tools that provide insights into the impact of AI on their business, the company said.

Going Beyond Microsoft’s Azure AI Foundry

Microsoft is not alone in this race. Amazon Bedrock provides access to a wide range of foundational models from various third-party providers, including Anthropic, Stability AI, and Hugging Face, in addition to Amazon’s own models like Titan. 

Users can customise and fine-tune them to suit specific use cases, making them highly adaptable for diverse applications. The integration with AWS services further improves its functionality. 

Bedrock has been noted to offer significant cost savings compared to Azure OpenAI in certain scenarios, with some comparisons putting the cost gap at as much as 556%. 

Azure AI Foundry mainly uses OpenAI’s models, including the GPT series, allowing for strong natural language processing capabilities. It emphasises user-friendly integration with Microsoft services and other external applications through an API ecosystem. 

Azure offers improved security measures, including private networking options that cater to enterprise needs. 

However, Bedrock tends to offer more flexibility regarding deployment options and model customisation, which can be beneficial for complex workflows. Azure AI Foundry, while also scalable, is particularly strong in environments that require integration with existing Microsoft tools and services, making it a preferred choice for organisations to stay rooted in the Microsoft space.

Both platforms operate on a pay-as-you-go pricing model. 

But, Bedrock generally provides more competitive pricing across various scenarios, especially when using its extensive model offerings and serverless capabilities. Azure AI Foundry, while potentially more expensive depending on usage patterns, offers robust features that justify its costs for enterprises focused on security and integration with Microsoft products.

More AI Models 

Access to the most powerful AI models will be needed to build AI agents, and Microsoft is ensuring this by capitalising on its partnership with OpenAI. Microsoft incorporated models like GPT-4 and Codex into the Azure OpenAI Service. These offer a comprehensive suite of AI tools seamlessly integrated with Azure’s cloud platform.

Azure OpenAI Service provides access to OpenAI’s models, including GPT-4o, GPT-4o mini, GPT-4, GPT-4 Turbo with Vision, DALL-E 3, and Whisper. These models come with Azure’s data residency, scalability, safety, security and enterprise capabilities, and can now be accessed via the Azure OpenAI Service API. 
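
For developers, calling a model deployed through the Azure OpenAI Service looks much like a plain OpenAI call, just pointed at an Azure resource and a named deployment. The endpoint, API version and deployment name below are placeholders, so treat this as a minimal sketch rather than a full integration.

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",  # placeholder; use the version your resource supports
)

response = client.chat.completions.create(
    model="my-gpt-4o-deployment",  # the deployment name, not the raw model ID
    messages=[{"role": "user", "content": "Summarise this week's support tickets."}],
)
print(response.choices[0].message.content)
```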

Vertex AI, Google’s friendly nemesis to Azure AI, was built to help teams develop, deploy, and scale machine learning models faster and more easily, while Azure AI is a similar cloud-based service platform offering comparable features. So what’s the difference between the two? 

One of Vertex AI’s advantages is its support for MLOps practices. Additionally, it optimises infrastructure, reducing training time and costs. The platform offers comprehensive ML tooling, including APIs, foundation models, and open-source models through the Model Garden.

On the other hand, Azure AI provides developers with tools to construct, train, and deploy machine learning models. It supports popular programming languages like Python and R, enabling efficient model building and deployment. Vertex AI also emphasises the integration of data and AI, bringing in popular tools like BigQuery, Dataproc, and Spark. 

On the engineering side, Azure Databricks allows effective processing and analysis of large datasets. 

Why Azure AI Foundry Rocks 

One of the key highlights from the event, and a standout feature, is that Azure AI has now been integrated with GitHub and Copilot Studio, with AI embedded directly into developers’ workflows. This combination gives developers an end-to-end experience for building, scaling, and deploying secure software to the cloud.

With over 1,800 AI models available for experimentation, the platform accelerates prototyping and innovation by providing a controlled environment for exploring and applying AI solutions. It also simplifies AI integration for developers, eliminating the need to piece together disparate tools and models.

“With features like bring your own storage and private networking, it will ensure data privacy and compliance to help organisations protect their sensitive data,” said the company. 

The Azure AI Foundry SDK provides a simplified coding experience that enables developers to merge components together to build AI applications, wherever they build, whether it’s GitHub, Visual Studio, or Copilot Studio. It supports the development journey from idea to code to cloud. 

Today, it is also available in Python and C#, with a JavaScript version coming soon. In this initial release, the SDK includes Azure OpenAI, AI model inferencing, Azure AI Search, Azure AI Agent Service, evaluation, tracing, and AI app templates.

The quickest way to get started with AI is to play with the models. GitHub models allow any developer to start for free. One can dive right in from Visual Studio by installing the AI Toolkit for VS Code. 

Source: Microsoft

Jessica Hawk, Microsoft’s corporate vice president of data, AI, and digital applications, wrote in a blog post that Azure AI Foundry is designed to help companies overcome barriers to AI adoption. Citing Deloitte Touche Ltd, she noted that 70% of organisations have moved 30% or less of their GenAI experiments into production due to implementation challenges. 

To tackle these obstacles, Azure AI Foundry provides Azure Essentials, a unified portal that combines Microsoft’s best practices, product insights, reference architectures, and resources. This portal offers businesses a standardised approach to adopting and scaling AI solutions. 

“As AI transforms industries and unveils new opportunities, we’re committed to providing practical solutions and powerful innovation to empower you to thrive in this evolving landscape,” said Hawk. 

“Everything we’re delivering today reflects our dedication to meeting the real-world needs of both developers and business leaders, ensuring every person and every organisation can harness the transformative power of AI.”

The post Every App is Now an AI App appeared first on Analytics India Magazine.

]]>
How AI Dragons Set GenAI on Fire This Year https://analyticsindiamag.com/deep-tech/how-ai-dragons-set-genai-on-fire-this-year/ Wed, 27 Nov 2024 09:30:00 +0000 https://analyticsindiamag.com/?p=10141761

Predictions for 2025 suggest that AI will become mainstream, speeding up the adoption of cloud-based solutions across industries.

The post How AI Dragons Set GenAI on Fire This Year appeared first on Analytics India Magazine.

]]>

If you thought the buzz around AI would die down in 2024, think again. Persistent progress in hardware and software is unlocking possibilities for GenAI, proving that 2023 was just the beginning.

2024, the Year of the Dragon, marks an important shift as GenAI becomes deeply woven into the fabric of industries worldwide. Businesses no longer view GenAI as just an innovative tool. Instead, it is being welcomed as a fundamental element of their operational playbooks. CEOs and industry leaders who recognise its potential are now focused on seamlessly integrating these technologies into their key processes.

This year, the landscape evolved rapidly and generative AI became increasingly indispensable, progressing from an emerging trend to a fundamental business practice.

Scale and Diversity

An important aspect is the growing understanding of how GenAI enables both increased volume and variety of applications, ideas and content. 

The overwhelming surge in AI-generated content is leading to consequences we are just starting to uncover. According to reports, over 15 billion images were generated by AI in one year alone – a volume that once took humans 150 years to achieve. This highlights the need for the internet post-2023 to be viewed through an entirely new lens.

The rise of generative AI is reshaping expectations across industries, setting a new benchmark for innovation and efficiency. This moment represents a turning point where ignoring the technology is not just a lost opportunity, but could also mean falling behind competitors.

“The top open source models are Chinese, and they are ahead because they focus on building, not debating AI risks,” said Daniel Jeffries, chief technology evangelist at Pachyderm. 

China’s success is underpinned by its focus on efficiency and resource optimisation. With limited access to advanced GPUs due to export restrictions, Chinese researchers have innovated ways to reduce computational demands and prioritise resource allocation. 

“When we only have 2,000 GPUs, the team figures out how to use it,” said Kai-Fu Lee, AI expert and CEO of 01.AI. “Necessity is the mother of innovation.” 

He further highlighted how his company transformed computational bottlenecks into memory-driven tasks, achieving inference costs as low as 10 cents per million tokens. “Our inference cost is one-thirtieth of what comparable models charge,” Lee further said. 

The rise of Chinese AI extends beyond its borders, with companies like MiniMax, ByteDance, Tencent, Alibaba, and Huawei targeting global markets. 

MiniMax’s Talkie AI app, for instance, has 11 million active users, half of whom are based in the US. 

At the Wuzhen Summit 2024, analysts noted that as many as 103 Chinese AI companies were expanding internationally, focusing on Southeast Asia, the Middle East, and Africa, where the barriers to entry were lower than the Western markets. 

ByteDance has launched consumer-focused AI tools like Gauth for education and Coze for interactive bot platforms, while Huawei’s Galaxy AI initiative supports digital transformation in North Africa. 

AI Video Models 

Models like Kling and Hailuo have outpaced Western competitors like Runway in speed and sophistication, which represents a shift in leadership in this emerging domain. This is reflected in advancements in multimodal AI, where models like LLaVA-o1 rival OpenAI’s vision-language models by using structured reasoning techniques that break down tasks into manageable stages.

The Rugged Boundary

In 2023, it became clear that generative AI is not just elevating industry standards, but also improving employee performance. According to a YouGov survey, 90% of workers agreed that AI boosts their productivity. Additionally, one in four respondents use AI daily, with 73% using it at least once a week.

Another study revealed that when properly trained, employees were able to complete 12% of tasks 25% faster with the assistance of generative AI, while the overall quality of their work improved by 40%. The greatest improvements were seen among low-skilled workers. However, for tasks beyond AI’s capabilities, employees were 19% less likely to produce accurate solutions.

This dual nature has led to what experts call the ‘jagged frontier’ of AI capabilities.

On one side, AI now performs tasks once deemed beyond machines’ reach with remarkable accuracy and efficiency. On the other, it struggles with tasks that require human intuition. These areas, defined by nuance, context, and complex decision-making, are where the binary logic of machines currently falls short.

Cheaper AI

As enterprises begin to explore the frontier of generative AI, we might see more AI projects take shape and become standard practice. This shift is driven by the decreasing cost of training LLMs, with advancements in silicon optimisation expected to halve training costs every two years. Despite growing demand and global shortages, the AI chip market is set to become more affordable in 2024 as new alternatives to industry leaders like NVIDIA emerge.

Moreover, new fine-tuning techniques such as self-play fine-tuning are making it possible to strengthen LLMs without relying on additional human-defined data. These methods use synthetic data to develop better AI with fewer human interventions.

Unveiling the ‘Modelverse’

The decreasing cost is enabling more companies to develop their own LLMs and highlighting a clear trend towards accelerating innovation in LLM-based applications in the next few years.

By 2025, we will likely see the emergence of locally executed AI instead of cloud-based models. This shift is driven by hardware advances like Apple Silicon and the untapped potential of mobile device CPUs.

In the business sector, small language models (SLMs) will likely find greater adoption by large and mid-sized enterprises because of their ability to address niche requirements. As implied by their name, SLMs are more lightweight than LLMs, which makes them well suited for real-time applications and easy integration across various platforms.

While LLMs are trained on massive, diverse datasets, SLMs concentrate on domain-specific data. In such cases, the data is often from within the enterprise. This makes SLMs tailored to industries or use cases, thereby ensuring both relevance and privacy. 
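
A sketch of what ‘locally executed’ can mean in practice: a sub-billion-parameter instruction-tuned model pulled from Hugging Face and run on laptop-class hardware. The model ID here is just one illustrative SLM, not a recommendation, and a domain-specific deployment would swap in a model fine-tuned on the enterprise's own data.

```python
from transformers import pipeline

# Small enough to run on a CPU or a single consumer GPU.
slm = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",
    device_map="auto",
)

prompt = (
    "Classify the following support ticket as billing, technical or other, "
    "and answer with a single word: 'My invoice shows the wrong amount.'"
)
print(slm(prompt, max_new_tokens=10)[0]["generated_text"])
```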

As AI technologies expand, so do concerns about cybersecurity and ethics. The rise of unsanctioned and unmanaged AI applications within organisations, also referred to as ‘Shadow AI’, poses challenges for security leaders in safeguarding against potential vulnerabilities.

Predictions for 2025 suggest that AI will become mainstream, speeding up the adoption of cloud-based solutions across industries. This shift is expected to bring significant operational benefits, including improved risk assessment and enhanced decision-making capabilities.

Organisations are encouraged to view AI as a collaborative partner rather than just a tool. By effectively training ‘AI dragons’ to understand their capabilities and integrating them into workflows, businesses can unlock new levels of productivity and innovation.

The rise of AI dragons in 2024 represents a significant evolution in how AI is perceived and utilised. As organisations embrace these technologies, they must balance innovation with ethical considerations, ensuring that AI serves as a force for good.

The post How AI Dragons Set GenAI on Fire This Year appeared first on Analytics India Magazine.

]]>
AI PoCs are Waste of Money https://analyticsindiamag.com/deep-tech/ai-pocs-are-waste-of-money/ Mon, 25 Nov 2024 13:41:35 +0000 https://analyticsindiamag.com/?p=10141675

“Models running in production environments prove value. PoCs are like college projects for business."

The post AI PoCs are Waste of Money appeared first on Analytics India Magazine.

]]>

As 2024 comes to a close, many enterprise and IT companies are moving their AI PoCs to production. It took them an average of two years to find use cases and offer products to enterprises. Given this delay, building all these solutions in-house now seems questionable, raising concerns among experts. 

With the debate around the value of AI PoCs intensifying, industry leaders offer contrasting views on whether they represent a prudent investment or a waste of resources. Critics argue that PoCs rarely deliver scalable value, while proponents emphasise their role in validating innovative AI solutions.

Vin Vashishta, AI advisor and author of From Data to Profit, is among the staunch critics of PoCs. He questioned their purpose, stating: “What’s the point of an AI PoC other than to make consulting companies rich? PoCs don’t deliver revenue and can’t be scaled in production,” he said, adding that while the value of such PoCs is 0, he still hears people trying to justify them. 

Credits: Vin Vashishta

If Not PoCs, Then What?

According to Vashishta, businesses should focus on simpler initiatives that build capabilities and deliver quantifiable results, rather than sinking money into PoCs that often lack direction or measurable outcomes.

Instead of PoCs, Vashishta advocates for alternative approaches such as leveraging vendor demos, trialling AI tools, and conducting educational sessions like seminars to introduce AI capabilities to businesses. He argued, “Every PoC objective can be achieved faster and for less with common-sense solutions. Don’t buy into the PoC money pit.”

This sentiment has largely prevailed in the Indian IT ecosystem. Most firms started out building AI PoCs and products, trying to mimic the success of big-tech companies like Google and Microsoft, or startups like OpenAI and Anthropic. Sooner or later, these IT giants realised it was simpler to build on those companies’ technologies, which made the transition from PoC to product much quicker.

In their latest earnings calls, Infosys, TCS, Wipro, HCLTech, and Tech Mahindra announced that their AI PoCs were moving into production. Most of them built them using Meta’s Llama and NVIDIA NIM frameworks and have also partnered with AWS and other cloud providers.

As for the enterprises, they are still trying to build such PoCs and move them to production, which according to Vashishta is a waste of time and resources.

The Case for PoCs

Not everyone agrees with the anti-PoC stance. Stefan Ojanen, an AI product leader and MLOps expert, defends PoCs as a critical step in deploying great AI models. “When working with bleeding-edge AI/ML tech, it’s fundamentally impossible to predict a solution’s efficacy without empirical validation in your context,” he said.

Meanwhile, Vijay Raaghavan, the head of enterprise innovation at Fractal, told AIM that this transition from PoCs to real-world applications has presented new challenges, particularly when it comes to measuring value, which is still the toughest part of driving investment in generative AI. He outlined a multi-layered approach: “Generative AI is not a plug-and-play solution. It requires the right data, hyperscale strategy, and long-term commitment.”

According to some of these voices, PoCs test AI solutions within the business’s specific environment, uncovering edge cases and architectural challenges. By identifying the limitations early on, they reduce the risk of failures during full-scale implementation. PoCs explore high-risk, high-reward opportunities, offering insights that boilerplate solutions from others might not provide.

“Dismissing PoCs as worthless is intellectually dishonest,” Ojanen asserted. “PoC is the first iteration. It’s a way to discover asymmetric advantages.”

A Middle Ground?

The tension between critics and supporters often centres around how PoCs are positioned within the broader AI strategy. Critics believe that if a PoC successfully demonstrates value, it ceases to be a PoC and becomes a product. “Models running in production environments prove value. PoCs are like college projects for business,” one argued.

On the other hand, proponents like Fabian Leon Ortega, co-founder and CTO at SunDevs, view PoCs as stepping stones to innovation. He pointed out that PoCs in customer support for telco companies have successfully evolved into production systems. “Thanks to the PoC, stakeholders could see the real value and potential of AI, applying the technology to their own business cases,” Ortega explained.

Vashishta’s criticism extends to the financial implications of PoCs. He contends that companies often treat PoCs as a substitute for robust architecture and design processes, leading to inefficiencies. “If you find a high-reward space, shouldn’t you explore it with a product customers will pay for rather than a PoC that doesn’t provide any new information?” he asked.

Advocates for PoCs say that they can be cost-effective when executed strategically. “I’d rather whip up an 8B model locally with sample data for a few thousand dollars than commit to a million-dollar project without validating assumptions first,” Ojanen remarked.

While PoCs can provide valuable insights and reduce uncertainty, their success hinges on clear objectives and alignment with business goals. Without these, they risk becoming a “money pit,” as Vashishta warns.

The post AI PoCs are Waste of Money appeared first on Analytics India Magazine.

]]>
AI and Quantum are Going to be a Lethal Combination https://analyticsindiamag.com/deep-tech/ai-and-quantum-are-going-to-be-a-lethal-combination/ Mon, 25 Nov 2024 10:14:55 +0000 https://analyticsindiamag.com/?p=10141663

“When a quantum computer that can crack the current cyber security systems comes up, you will be completely open to attacks,” said Ajai Chowdhry, chairman–Mission Governing Board of National Quantum Mission.

The post AI and Quantum are Going to be a Lethal Combination appeared first on Analytics India Magazine.

]]>

Recently, HCL co-founder Ajai Chowdhry, popularly known as the ‘Father of Indian Hardware’, spoke to AIM about India’s quantum mission and highlighted the magic AI can bring to quantum computing. He referred to the AI and quantum blend as a “lethal” one. 

Major tech companies like Google and Microsoft are actively developing this combination to drive the next wave of the AI revolution.

Big Tech’s AI Quantum Leap

Quantum computing has accelerated developments across varied sectors, such as drug discovery and security. Chowdhry highlights that quantum computers are especially suited for drug development, significantly reducing both the time required for drug discovery and the cost of medications.

Drug Discovery and Cybersecurity

Google Quantum AI, an initiative that started over a decade ago to advance quantum computing and its applications, focuses on developing algorithms and tools to solve complex problems beyond the reach of traditional computers. It has made significant strides in drug discovery. 

Source: X

Google’s Sycamore processor and other quantum models are being used to simulate chemical reactions to accelerate the process of identifying new drugs. Furthermore, the future versions of AlphaFold could leverage quantum algorithms to process complex biological data and investigate protein structure spaces that classical methods cannot compute.

Cybersecurity is another key area where quantum plays a pivotal role. “When a quantum computer which can crack the current cyber security systems comes up, you will be completely open to attacks. This can happen in four to six years. When a powerful quantum computer is ready, it can crack your financial systems, your security systems, and everything else,” explained Chowdhry. 

Rahil Patel, chief growth officer of QNu Labs, a quantum-safe data security company, said in an interaction with AIM that AI acts as an enabler in quantum fields, especially in cryptography and secure communications. He noted that while AI could pose risks by potentially breaking traditional encryption methods, quantum technologies can provide protection against these threats.

Recently, Chinese researchers claimed to have used a D-Wave quantum computer to successfully attack substitution-permutation network (SPN) algorithms. The findings pose a significant threat to encryption standards such as RSA and AES, which are widely used in the banking and military sectors.

Development Continues

Google DeepMind recently introduced AlphaQubit, an AI-based system designed to improve the reliability of quantum computers by identifying and decoding errors with high accuracy. 

Detailed in the paper ‘Learning high-accuracy error decoding for quantum processors’, AlphaQubit uses a neural network to decode the surface code, setting new benchmarks in error suppression for quantum systems. This development is considered to be a huge breakthrough in how AI can be advanced for quantum. 

Similarly, Microsoft, in collaboration with Atom Computing, has introduced a quantum system with 24 logical qubits, which brings Microsoft closer to fault-tolerant quantum computing. CEO Satya Nadella stated that 100 reliable qubits will mark the achievement of scientific quantum advantage.

Tech giant IBM has been at the forefront of quantum advancements. The company is working on quantum machine learning, where quantum algorithms are used to accelerate the training of AI models. It has also built an AI-powered quantum chatbot named Qiskit Code Assistant. 

Not That Simple

Source: X

While the potential of quantum computers combined with AI seems promising, the challenge of building it remains an obstacle considering how capital-intensive it is. In India, not just capital, but a good collaboration between various institutes is also required for this.

Chowdhry described the Indian quantum mission as a unique policy that offers startup funding of up to ₹25 crore, unlike traditional grants of ₹50 lakh to ₹2 crore. “That [traditional grants] kind of money is absolutely useless in quantum because quantum needs research and it needs working with education and research institutions,” he said. 

Looking at where India stands in the whole quantum computing scene, Chowdhry said that though it is lagging a bit, many countries are, in fact, nowhere close to India. “It’s just that five to seven countries are ahead of us.”

The post AI and Quantum are Going to be a Lethal Combination appeared first on Analytics India Magazine.

]]>