Stocks
This page tracks investable themes across the AI stack. Treat this as an industry map and an earnings prep sheet rather than advice.
Ecosystem map
Think in layers so you can see where value accrues and where bottlenecks appear.
Layer | Role | Representative names | What to watch
---|---|---|---
Semiconductors | Training and inference compute plus core enablers | Nvidia, AMD, Intel, Broadcom, TSMC, ASML | Accelerator roadmaps (H200, B200, MI300), foundry lead times, advanced packaging capacity, node mix
Model labs | Foundation and reasoning models offered via API or open weights | OpenAI, Google DeepMind, Anthropic, Meta, xAI, Alibaba (Qwen) | Quality at equal spend, context window, tool use, safety posture, pricing and quotas
Cloud and infra | GPU fleets, storage, networking, orchestration, and billing | AWS, Azure, Google Cloud, Oracle Cloud | AI service revenue disclosure, GPU hours, utilization, regional capacity, new instance types
Integrators and services | Enterprise rollout, consulting, data plumbing, and change management | Accenture, Deloitte, Infosys, TCS | Backlog growth, attach of AI modules, deal size and duration
Applications | Vertical and horizontal software with AI features | Adobe, Salesforce, ServiceNow, SAP, CrowdStrike, Datadog, Palantir | AI seat mix, price uplift, usage of premium features, hard outcome metrics
Earnings to watch
Use the following signal list when reading transcripts and slides. Focus on demand intent, supply reality, and product traction.
Semiconductors
- Data center revenue mix, the training versus inference split, unit shipments, and average selling prices
- Allocation policy for top customers and visibility into next quarter's supply
- Roadmap timing for H200, B200, and MI300 class parts, plus software stack progress
- Advanced packaging updates, including CoWoS capacity, substrate supply, and yield
Cloud and infra
- Contribution from AI services in absolute terms and as a share of cloud revenue
- GPU hours booked, queue length for new capacity, and credits for startups
- Inference platforms and managed agents offered to enterprises and developers
- Capital expenditure for buildouts, with detail on regions and power access
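The first bullet in this list, AI contribution as a share of cloud revenue, is simple arithmetic but worth standardizing so quarters are comparable. A minimal helper, with hypothetical figures:

```python
def ai_share(ai_revenue: float, cloud_revenue: float) -> float:
    """AI services revenue as a percentage of total cloud revenue."""
    if cloud_revenue <= 0:
        raise ValueError("cloud revenue must be positive")
    return 100.0 * ai_revenue / cloud_revenue

# Hypothetical example: $1.2B of AI services on $25B of cloud revenue.
print(round(ai_share(1.2e9, 25e9), 1))  # 4.8
```

Tracking this ratio quarter over quarter, alongside the absolute figure, separates genuine AI traction from overall cloud growth.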
Model labs
- New model releases and measured gains on public evals at strict settings
- Pricing tiers, quota policy, and partner channel momentum
- Safety investments, red team cadence, and incident reporting approach
- Inference margin and the mix between direct API and cloud partner routes
Integrators and services
- Booked backlog that is clearly AI related, and its share of the total
- Time to value on pilots and conversion rates into production
- Reusable accelerators and reference architectures that shorten delivery
Applications
- Paid adoption of AI features and per user uplift versus base plans
- Outcome metrics such as resolved tickets, code merged, fraud blocked, or time saved
- Model spend as a percent of revenue and unit economics at scale
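One way to apply the checklist above is a rough keyword scan over a transcript to see which signal categories management actually addressed. The keyword lists below are hypothetical starting points, not a vetted taxonomy:

```python
# Rough first-pass scan of an earnings transcript for the signal themes
# listed above. Keywords are illustrative, not exhaustive.
SIGNALS = {
    "demand": ["backlog", "allocation", "visibility", "bookings"],
    "supply": ["capacity", "cowos", "packaging", "lead time", "hbm"],
    "product traction": ["adoption", "attach", "seat", "usage", "uplift"],
}

def scan_transcript(text: str) -> dict[str, list[str]]:
    """Return, per category, the keywords that appear in the transcript."""
    lower = text.lower()
    return {
        category: [kw for kw in keywords if kw in lower]
        for category, keywords in SIGNALS.items()
    }

snippet = "Backlog grew again and CoWoS capacity remains the gating factor."
print(scan_transcript(snippet))
```

A scan like this only tells you what was mentioned; read the flagged passages yourself before treating any of it as a signal.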
Key suppliers
Compute, memory, networking, and power are the main constraints that shape each cycle. Track these to understand supply risk and pricing power.
Compute
- Foundry and packaging: TSMC for advanced nodes and CoWoS availability, plus substrate supply from names like Ibiden and Unimicron
- Board makers and servers: Supermicro, Dell, HPE, and major original design manufacturers such as Quanta and Wiwynn
- Cooling: air-to-liquid transitions and the facility retrofits needed for high-density racks
Memory
- HBM bit growth and the mix shift to HBM3E, with clarity on supply from SK hynix, Samsung, and Micron
- Pricing and contract length for accelerator bundles where memory is the binding constraint
- Throughput of test and burn-in, and known-good-stack rates
Networking
- Switch silicon roadmaps, including 400G and 800G Ethernet and InfiniBand upgrades
- Optics availability for active and pluggable modules, and supply from Coherent and Lumentum
- Data center fabric choices and the share of Ethernet versus InfiniBand for training and inference
Power and facilities
- Utility interconnect lead times and substation upgrades near cloud regions
- Power delivery units and uninterruptible power supplies from Eaton, Schneider Electric, and peers
- Thermal management and building systems from Vertiv and other specialists
Watch items that move the whole stack
- Any surprise in HBM supply or yields since this sets the pace for many GPU deliveries
- Packaging expansion success at foundries since this clears backlogs for accelerator boards
- Regional power constraints and permitting timelines since these gate new cloud capacity
- Shift from training to inference spend, which changes the mix for chips, memory, optics, and software