One of the most exciting developments in Generative AI is that model capabilities have improved predictably with scale and are, "...nowhere near the point of diminishing." However, while this notion fuels enthusiasm for new, cutting-edge capabilities, there is an arguably equally impactful implication that receives less mainstream hype: as we get better at compressing AI models, previously cutting-edge capabilities can run on local hardware, returning us to the "Zero Marginal Cost" model that has defined Silicon Valley.