Many higher education institutions are developing teaching curricula for Data Science and AI/ML degrees. With more classes held every semester, demand is growing for computing environments that give students a positive hands-on experience in exercises, workshops, and assignments. There is also a steady, ever-growing demand for GPU resources from the larger research community at these same institutions. Cambridge Computer's HPC/AI team has partnered with several technology companies and higher education customers to deliver a shared resource that leverages new GPU sharing and scheduling technologies, along with disaggregated, composable solutions, to make better use of new and existing GPUs. These solutions not only reduce cost by improving utilization of existing GPU assets, but also deliver flexible, state-of-the-art AI/ML infrastructure for research computing.

Agenda:

* Introduction to new GPU scheduling and sharing technologies -- Run:AI
* Delivery methods using both the native Kubernetes scheduler and more traditional schedulers
* Multi-tenancy for research and teaching
* GPU sharing with native NVIDIA tools and new dynamic options (an illustrative request appears after this agenda)
* Sample use cases and deployed designs covering compute, interconnect, storage, and software options
* Concluding Q&A (15 min)
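
For context on what "GPU sharing and scheduling" looks like from a user's perspective, the following is a minimal sketch of submitting a GPU-backed workload on a Kubernetes-based shared cluster, assuming the NVIDIA device plugin is installed and the official `kubernetes` Python client is available. The namespace, container image, and pod name are illustrative assumptions, not part of any specific deployment discussed in the session; fractional or MIG-sliced resource names depend on how a given cluster is configured.

```python
# Minimal sketch: request a GPU from a shared Kubernetes cluster.
# Assumes a reachable cluster with the NVIDIA device plugin; names below are illustrative.
from kubernetes import client, config


def submit_gpu_pod(namespace: str = "teaching") -> None:
    """Submit a pod that asks the scheduler for a single GPU."""
    config.load_kube_config()  # uses the local kubeconfig; in-cluster config also works

    container = client.V1Container(
        name="notebook",
        image="nvcr.io/nvidia/pytorch:24.01-py3",  # example CUDA-enabled image
        command=["python", "-c", "import torch; print(torch.cuda.is_available())"],
        resources=client.V1ResourceRequirements(
            # Whole-GPU request. On MIG-enabled clusters, a slice resource such as
            # "nvidia.com/mig-1g.5gb" may be exposed instead (cluster-dependent).
            limits={"nvidia.com/gpu": "1"},
        ),
    )

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="gpu-smoke-test"),
        spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
    )

    client.CoreV1Api().create_namespaced_pod(namespace=namespace, body=pod)


if __name__ == "__main__":
    submit_gpu_pod()
```

Schedulers such as Run:AI layer quotas, fair-share policies, and fractional GPU allocation on top of this basic request model, which is what makes a single pool of GPUs practical to share across teaching and research tenants.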