
This week’s new tools, tutorials, and resources 👇
Fisheye Effect: Apply fisheye lens distortion to your images with adjustable curvature, vignetting, blur, and chromatic aberration (FREE).
Shapeoholic: Elevate your designs with 140+ free, customizable SVG shapes (FREE).
ChatGPT Translate: Translate text between languages with context-aware accuracy (FREE).
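For the curious, the fisheye distortion behind tools like the one above boils down to a radial remap: each pixel is pushed away from the image center as a function of its squared distance from it (r' = r(1 + k·r²)). Here's a minimal sketch; the function name and the strength constant `k` are illustrative assumptions, not the tool's actual parameters.

```python
# Illustrative barrel ("fisheye") distortion remap; not the Fisheye Effect
# tool's actual implementation. The strength constant k is an assumption.

def fisheye_map(x, y, width, height, k=0.5):
    """Map an output pixel (x, y) back to its source coordinate."""
    # Normalize coordinates to [-1, 1] around the image center.
    nx = (2 * x - width) / width
    ny = (2 * y - height) / height
    # Radial remap r' = r * (1 + k * r^2): larger k bends edges more.
    r2 = nx * nx + ny * ny
    scale = 1 + k * r2
    # Convert back to pixel coordinates.
    sx = (nx * scale * width + width) / 2
    sy = (ny * scale * height + height) / 2
    return sx, sy

center = fisheye_map(50, 50, 100, 100)    # center is unchanged
corner = fisheye_map(100, 100, 100, 100)  # corners shift outward
```

Adjustable curvature in such tools typically maps to a constant like `k`; vignetting and chromatic aberration are separate per-pixel passes layered on top.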
FROM OUR FRIENDS @ MINDSTREAM
Turn AI Into Extra Income
You don’t need to be a coder to make AI work for you. Subscribe to Mindstream and get 200+ proven ideas showing how real people are using ChatGPT, Midjourney, and other tools to earn on the side.
From small wins to full-on ventures, this guide helps you turn AI skills into real results, without the overwhelm.
Interested in sponsoring our newsletter? Book an ad here.
TOP STORY
🌍 Google releases “Project Genie”

This week, Google released "Project Genie," an experimental prototype that lets you create and explore AI-generated interactive worlds in real time.
Powered by Google DeepMind's Genie 3 world model, the tool transforms text prompts and images into navigable 3D environments. Think walking through a fantasy forest, driving across volcanic terrain, or exploring ancient Athens, all generated on the fly as you move.
A "world model" is an AI system that simulates how an environment evolves and how your actions affect it. Unlike static 3D snapshots, Genie 3 generates the path ahead in real time as you move, at 24 frames per second.
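Conceptually, that loop can be sketched in a few lines. This is a toy illustration of the "predict the next frame from state plus action, inside a fixed frame budget" idea, not Genie 3's actual interface; all names here are hypothetical.

```python
# Toy sketch of a real-time world-model loop (illustrative only, not
# Genie 3's actual interface): each frame, the model predicts the next
# view from the current state and the player's action, inside a fixed
# 24 fps frame budget.
import time

FPS = 24
FRAME_BUDGET = 1.0 / FPS  # ~41.7 ms to generate and present each frame

def predict_next_frame(state, action):
    # Stand-in for the world model's forward step (the expensive part).
    return {"frame": state["frame"] + 1, "last_action": action}

def explore(actions):
    state = {"frame": 0, "last_action": None}
    for action in actions:  # e.g. WASD key presses
        start = time.perf_counter()
        state = predict_next_frame(state, action)
        # Hold the frame rate: sleep off whatever budget remains.
        time.sleep(max(0.0, FRAME_BUDGET - (time.perf_counter() - start)))
    return state

final_state = explore(["W", "W", "A", "D"])
```

The hard part, of course, is making the real forward step both fast enough to fit the ~42 ms budget and consistent with everything already generated.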

With Project Genie, you can:
Sketch your world: Describe an environment and character with text, upload an image, or hit "Roll the dice" for a surprise world. You can preview and fine-tune the scene before jumping in.
Explore in real time: Navigate your generated world using WASD controls, with the AI creating new scenery as you move (maintaining visual consistency for up to a minute of exploration).
Remix existing worlds: Build on top of curated gallery worlds or your own creations, then download videos of your explorations to share.
Project Genie is currently available to Google AI Ultra subscribers in the U.S. (18+), with plans to expand access to more regions.
Here are some cool examples that users have generated 👇
Each exploration session is limited to 60 seconds, and Google notes that world realism, character control, and latency are still being improved in this early prototype.
FROM OUR FRIENDS @ MORNING BREW
Like coffee. Just smarter. (And funnier.)
Think of this as a mental power-up.
Morning Brew is the free daily newsletter that helps you make sense of how business news impacts your career, without putting you to sleep. Join over 4 million readers who come for the sharp writing, unexpected humor, and yes, the games… and leave feeling a little smarter about the world they live in.
Overall—Morning Brew gives your business brain the jolt it needs to stay curious, confident, and in the know.
Not convinced? It takes just 15 seconds to sign up, and you can always unsubscribe if you decide you prefer long, dull, dry business takes.
Interested in sponsoring our newsletter? Book an ad here.
ARTIFICIAL INTELLIGENCE
✏️ KREA releases “Realtime Edit”

This week, KREA AI released "Realtime Edit," a tool that lets you edit images with complex text instructions (and see results update instantly as you type).
Unlike traditional AI image tools, where you submit a prompt and wait for results, Realtime Edit responds in under 50 milliseconds.
That means the image transforms character-by-character as you describe what you want, creating an interactive feedback loop that feels more like drawing than prompting.
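That feedback loop can be pictured as re-running the edit on every keystroke with the prompt typed so far. The sketch below is an illustration of the pattern only; `apply_edit` is a hypothetical stand-in, not KREA's actual API.

```python
# Minimal sketch of a character-by-character feedback loop (an
# illustration, not KREA's actual API): every keystroke re-runs the edit
# with the prompt prefix typed so far, so the preview tracks the text live.

def apply_edit(image, prompt):
    # Stand-in for a sub-50 ms image-edit model call.
    return f"{image} + '{prompt}'"

def type_prompt(image, prompt):
    previews = []
    for i in range(1, len(prompt) + 1):
        # The model only ever sees the prefix typed so far.
        previews.append(apply_edit(image, prompt[:i]))
    return previews

frames = type_prompt("sketch.png", "snake")
```

With a sub-50 ms model call, each prefix renders before the next character lands, which is what makes the result feel like drawing rather than prompting.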

With Realtime Edit, you can:
Transform sketches into images: Draw simple shapes or stick figures on the canvas and watch them become photorealistic renders in milliseconds.
Edit with natural language: Type instructions like "re-create this sketch into a snake made of cucumber" and see the AI interpret your intent live.
Work alongside other tools: Use Screen Mirroring to capture windows from Blender, Photoshop, or Figma (turning rough 3D mockups into polished renders in real time).
Choose from multiple AI models: Select from 10 editing models, including Nano Banana Pro, Flux Kontext, and Qwen, depending on your needs.
The tool is available for free with a KREA AI account, with paid plans ($10–$60/month) offering additional features and commercial usage rights.
Here are some cool examples of what you can do with “Realtime Edit” 👇
FROM OUR FRIENDS @ THE DEEPVIEW
Stop Drowning In AI Information Overload
Your inbox is flooded with newsletters. Your feed is chaos. Somewhere in that noise are the insights that could transform your work—but who has time to find them?
The Deep View solves this. We read everything, analyze what matters, and deliver only the intelligence you need. No duplicate stories, no filler content, no wasted time. Just the essential AI developments that impact your industry, explained clearly and concisely.
Replace hours of scattered reading with five focused minutes. While others scramble to keep up, you'll stay ahead of developments that matter. 600,000+ professionals at top companies have already made this switch.
Interested in sponsoring our newsletter? Book an ad here.
GRAPHIC DESIGN
📸 Photoshop releases “Reference Image”

Reference Image tool
This week, Adobe released “Photoshop 27.3,” introducing new non-destructive adjustment layers, smarter Generative Fill capabilities, an improved Remove tool, and more.
The new adjustment layers work like any others: you can mask them, adjust opacity, change blend modes, and keep them fully editable in your PSD.
1) "Reference Image" in Generative Fill

You can now feed Photoshop a reference photo, and it will try to match the lighting, color, and structure when generating new content (useful for compositing work or keeping a series of images visually consistent).
2) "Clarity and Dehaze" Adjustment Layer

Clarity and Dehaze are now available as non-destructive adjustment layers, features that previously required opening Camera Raw or converting to a Smart Object.
Clarity: Adds punch to textures and midtone details without blowing out highlights or crushing shadows.
Dehaze: Cuts through atmospheric haze (or adds it if you reverse the slider).
Having these as adjustment layers means you can apply them selectively with masks, adjust opacity, and change blend modes.
3) "Grain" Adjustment Layer

Add and refine film grain non-destructively, removing the need for workarounds like noise layers.
4) Updated Firefly Models

Adobe's "Generative Fill" and "Generative Expand" now feature the latest Firefly Fill & Expand model for more accurate results.
Generative outputs now render at up to 2K resolution, so extended canvases and filled areas hold detail much better.
5) Updated "Remove Tool"

The “Remove” tool now does a cleaner job removing objects and people, with fewer obvious smears and repetitive patterns.
In most cases, you'll get a usable result without needing to follow up with Clone Stamp or Healing Brush.
OTHER STORIES
Everything else in creative news 🗞
Figma’s “Glass” effect is now generally available, with new updates that let you: add glass to any object, shape, text, or frame, design with non-uniform corners, use the “Splay property,” and apply variables to Glass properties.
Figma released new iPhone 17 device frames, making it easier to design and prototype for current devices: iPhone 17, iPhone 17 Pro, iPhone 17 Pro Max, and iPhone Air.
Figma now lets you turn conversations, documents, and images into visual diagrams inside Claude, helping you break down complexity and visualize the bigger picture.
Figma for Google Workspace add-on now works with Google Chat, so you can stay on top of your Figma updates.
Adobe InDesign now allows you to convert vector artwork created in Adobe Illustrator into an InDesign Layout.
xAI released “Grok Imagine 1.0,” unlocking 10-second videos, 720p resolution, and dramatically better audio.
Framer now allows you to add and customize empty states for CMS collection lists directly in the Canvas.
Framer released the new “createManagedCollection API,” giving plugin authors a cleaner way to create and manage CMS collections reliably across all modes, so workflows keep moving without extra setup.
Blender Foundation announced that Netflix Animation Studios is joining the Blender Development Fund as a Corporate Patron.
Webflow released new Interactions with GSAP features, including a Spline action for animating 3D scenes and an ease visualizer to customize easing curves with visual controls and presets.
Webflow Enterprise sites have been migrated to a modern CMS architecture, unlocking higher content scale, richer data models, and increased design flexibility.
Webflow now lets you edit content within interactive elements such as navbars, dropdowns, tabs, and sliders.
Webflow now allows you to hide and show elements dynamically with conditional visibility and simplify prop setup with suggested props.
Webflow now lets Designers and Admins preview how Marketers and Content editors experience the canvas.
Webflow released a Google Ads for Webflow app, now available in the Webflow Marketplace.
Gamma released “AI Animations,” allowing you to generate presentations with AI animations and prompt animations in any card.
Riverside’s “Co-Creator” now allows you to add specific instructions once for every asset.
Freepik released “Multiple Model Generation,” allowing you to test up to 4 models at once with the same prompt and settings, and compare them side-by-side.
Freepik added new AI tools to the Clip Editor: Motion Shake, Audio Isolation, and Video FX.
Invideo released “AI Motion Graphics” with Anthropic, allowing you to generate motion graphics from a single prompt.
Invideo released a motion graphics preset pack including 5 new presets: Map Outline, Map Callout, Glass Morphism, Instant Message, and Highlight Typography.
Jitter released “The Click,” a new set of templates built around buttons and micro-interactions.
Lottielab released “Identity,” a new template pack for animating and showcasing your brand.
Higgsfield AI added support for xAI’s “Grok Imagine.”
Replit launched a new LinkedIn certification that lets you verify your coding skills directly on LinkedIn, with interactive challenges that test real-world programming abilities.
Replit added Voice AI inside Replit (no API keys or setup required).
Replit now supports “Open in Replit” prefilled prompts, allowing you to create links that open Replit with a pre-written prompt, making it easier for others to get started building.
HeyGen now lets you create videos with HeyGen + Claude Code.
Bolt now allows you to drop Figma frames directly into existing Bolt projects.
Lovable is now 71% better at solving complex tasks, with: intelligent planning before building, Google authentication in one prompt, automated testing, prompt queuing, and more.
Luma AI released “Ray 3.14,” now with native 1080p HD, 4x faster generation, and 3x lower cost.
LTX Studio released a new “Brush” tool.
VEED’s AI Video API is now live, allowing you to connect VEED with 1,000+ apps on n8n and build end-to-end AI video workflows.
Restream released a new channels page, now allowing you to sort & search your channels, see the streams they were enabled in, and check specifications for each platform.
LTX Studio has added fonts and logos as part of Elements in LTX.
v0 released “Folders” and “Projects,” introducing a more simplified organization structure.
Artlist added support for xAI’s “Grok Imagine.”
*Some links in this newsletter may be from sponsors or affiliates, which means we might earn a commission if you make a purchase.