Responsibilities will include, but are not limited to:

- Build and maintain relationships with ecosystem partners, including hyperscalers, accelerator vendors, IP providers, and academic collaborators.
- Develop deep technical expertise in AI/ML and LLM computing architectures, including training/inference pipelines, memory bandwidth requirements, and compute-memory interconnects.
- Work with technical experts to analyze emerging AI workloads and software frameworks, identifying memory subsystem bottlenecks and opportunities for HBM optimization.
- Develop and present technical briefings and technology roadmap SWOT reviews to senior leadership and technical stakeholders.
- Contribute to pathfinding efforts for future HBM generations, including co-packaged memory, chiplet architectures, and advanced packaging solutions.
- Collaborate with internal architecture, design, and product teams to align HBM roadmap features with AI ecosystem needs.

Qualifications include:

- Experience engaging with customers, partners, and industry consortia at a deep technical level.
- PhD or Master's degree in Electrical Engineering, Computer Engineering, or a related field.
- 15+ years of experience in semiconductor technology.
- Demonstrated impact in shaping technology roadmaps at the organizational level.