Laughing Hyena
Tag: Google

NFT Gaming

Google Sets New AI Standards, OpenAI Gets Physical with Jony Ive, MIND of Pepe Ready for Takeover

by admin May 22, 2025


Trusted Editorial content, reviewed by leading industry experts and seasoned editors. Ad Disclosure

AI may finally be crossing into the world of physical devices, just as Google rolls out a jaw-dropping arsenal of advanced AI tools.

Amid the frenzy, MIND of Pepe is leading its own AI-powered uprising, entering the final stage of a presale that’s already pulled in a cool $9.8 million.

So, what exactly does this mean for the future of AI and crypto? Let’s get into it.

Google Raises Bar for Generative AI

It’s only been around for a few years, but generative AI is growing up at an accelerated rate. Google might have just super-charged that growth rate.

To date, AI has been able to deliver video, images, text, voiceovers, and more. But often, putting those together – into, say, a short company promotional video – requires using a suite of tools in unison.

Google drops that suite down to four core tools:

  • Gemini is Google’s all-in-one solution, with web search results, video generation, and more
  • Veo 3 delivers integrated video generation, able to produce videos, soundtracks, and more all in a single prompt
  • Flow is a Veo 3-based video editing tool, relying on the same tools for editing rather than producing videos
  • Imagen, the next iteration of Google’s image generator, boosts image creation to the next level, prioritizing photo-realism while remaining capable of integrating disparate elements

The full potential of the Gemini toolset is impressive, with fewer tools delivering a more integrated result for users.

But OpenAI, creator of the now-ubiquitous ChatGPT, has stepped up its own game.

OpenAI Turns to Apple’s Designer for AI Hardware

OpenAI might be looking to the next stage of AI’s lifecycle: bringing AI into the real world with AI-powered devices.

To help design the hardware, they turned to someone who helped design the last wave of world-changing devices: Jony Ive, designer of the iPhone.

The $6.5B deal between Jony Ive and OpenAI has the potential to shape the move from ‘AI as a cool new internet trick’ to ‘AI as a fundamental part of daily life.’

That’s what Apple did with the iPhone (and iPods, and iMac, and i-Everything else), and it’s what OpenAI wants to do with ChatGPT.

And just remember – it’s worked out pretty well so far for Apple.

Will Jony Ive work his magic again? OpenAI hopes so.

Over on the crypto front, MIND of Pepe is working to integrate two unlikely allies to break through on another frontier of AI’s development.

MIND of Pepe: AI Agent for Long-Awaited AI-Crypto Breakthrough

MIND of Pepe ($MIND) is launching an AI-powered meme coin. Why?

Because a memecoin brings momentum, energy, and captures the crypto zeitgeist. Everyone loves the best meme coins, and they have huge potential for growth.

At the same time, AI agents can learn from social media, interact with the blockchain, and learn from the hivemind.

MIND of Pepe has already launched on X, learning more by the moment about crypto and the crypto market as a whole.

It’s all part of an aggressive and ambitious roadmap.

  • Token deployment – the $MIND token is bearing down on $10M raised in its presale, with only __ days to go.
  • MIND AI agent – MIND of Pepe launched on X in April 2025
  • Market trend analysis – as MIND learns, it will analyze the memecoin and crypto markets
  • Holder-exclusive alpha – the more MIND learns, the more value it will be able to give to token holders

The MIND of Pepe agent is fully autonomous. There’s no direct oversight of the agent, which means MIND is free to interact with the blockchain and with social media.

Will MIND of Pepe be able to succeed where other projects failed?

Harnessing the familiarity of crypto’s favorite green frog, MIND of Pepe can achieve widespread recognition and momentum from the famous memecoin, while adding the untapped potential of AI.

But if you’re just learning about MIND of Pepe now, don’t wait: there are only 8 days left in the current presale, with tokens priced at $0.0037515.

Buy now, and you’ll be able to stake $MIND for the last few days of the presale, at an impressive 229% APY.

The upside is huge. With an AI token market already valued at $39B, there’s room for a crypto presale like MIND of Pepe to take off – and bring the rest of the sector with it.

Learn how to buy MIND of Pepe, and see why our $MIND price prediction shows the token could reach $0.00535 by the end of the year – a 42% increase.
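That prediction is easy to sanity-check. The quick Python sketch below just plugs in the two prices quoted above; it is arithmetic, not investment analysis.

```python
presale_price = 0.0037515  # presale price quoted above, USD
target_price = 0.00535     # year-end prediction quoted above, USD

# Percentage gain implied by moving from the presale price to the target.
gain_pct = (target_price / presale_price - 1) * 100
print(f"Implied upside: {gain_pct:.1f}%")  # ~42.6%, in line with the quoted ~42%
```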

Google, OpenAI, and Pepe: The Path Forward for AI Adoption?

Memecoins launched by MIND of Pepe, Google’s full Gemini suite to produce videos and images, and slick new AI devices from OpenAI. In that potential future, how far could $MIND go?

As always, make sure to do your own research before investing in MIND of Pepe — this isn’t financial advice, just a heads-up.

That said, time is running out. With just over a week left in the presale, the window to get in early is closing fast. If you’re curious about the future of AI-powered crypto and meme innovation, now’s the moment to take a closer look.

Editorial Process for Bitcoinist is centered on delivering thoroughly researched, accurate, and unbiased content. We uphold strict sourcing standards, and each page undergoes diligent review by our team of top technology experts and seasoned editors. This process ensures the integrity, relevance, and value of our content for our readers.



Crypto Trends

Google Doubles Down on AI: Veo 3, Imagen 4 and Gemini Diffusion Push Creative Boundaries

by admin May 21, 2025



Google I/O 2025 was never about subtlety. This year, the company abandoned incrementalism, delivering a cascade of generative AI upgrades that aim to redraw the map for search, video, and digital creativity.

The linchpin: Gemini, Google’s next-gen model family, is now powering everything from search results to video synthesis and high-resolution image creation—staking out new territory in a race increasingly defined by how fast, and how natively, AI can generate.

The showstopper is Veo 3, Google’s first AI video generator that creates not just visuals, but complete soundtracks—ambient noise, effects, even dialogue—synchronized directly with the footage. Text and image prompts go in, and fully-produced 4K video comes out.

This marks the first large-scale video model capable of generating audio and visuals simultaneously. The trend began with Showrunner Alpha, an unreleased model, but Veo 3 offers far more versatility, generating a range of styles beyond simple 2D cartoon animations.

“We’re entering a new era of creation with combined audio and video generation,” Google Labs VP Josh Woodward said during the launch. It’s a direct challenge to current video generation leaders—Kling, Hunyuan, Luma, Wan, and OpenAI’s Sora—positioning Veo as an all-in-one solution rather than requiring multiple tools.

Alongside Veo 3, Imagen 4—Google’s latest iteration of its image generator model—arrives with enhanced photorealism, 2K resolution, and perhaps most importantly, text rendering that actually works for signage, products, and digital mockups.

For anyone who’s suffered through the gibberish text created by previous AI image models, Imagen 4 represents a significant improvement.

These tools don’t exist in isolation. Flow AI, a new subscription feature for professional users, combines Veo, Imagen, and Gemini’s language capabilities into a unified filmmaking and scene-editing environment. But this integration comes at a price: $125 per month for the complete toolkit during a promotional period, before the full $250 monthly price kicks in.

Image: Google

Gemini: Powering search and “text diffusion”

Generative AI isn’t just for content creators. Gemini 2.5 now forms the backbone of the company’s redesigned search engine, which Google wants to evolve from a link aggregator into a dynamic, conversational interface that handles complex queries and delivers synthesized, multi-source answers.

AI overviews—where Google Gemini attempts to provide comprehensive answers to queries without requiring users to click through to other sites—now sit at the top of search pages, with Google reporting over 1.5 billion monthly users.

Image: Google via YouTube

Another interesting development is “Gemini Diffusion,” built with technology pioneered by Inception Labs months ago. Until recently, the AI community generally agreed that autoregressive technology worked best for text generation while diffusion technology excelled for images.

Autoregressive models generate each new token after reading all previous generations to determine the best next token—ideal for crafting coherent text responses by constantly reviewing the prompt and prior output.

Diffusion technology operates differently, starting with filling all the context with random information and refining (diffusing) the output each step to make the final product match the prompt—perfect for images with fixed canvases and aesthetics.
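The contrast between the two approaches can be caricatured in a few lines of Python. This is a toy sketch of the control flow only: the vocabulary, sampling rule, and refinement schedule are all invented for illustration and bear no resemblance to either model’s actual architecture.

```python
import random

random.seed(0)

VOCAB = ["the", "cat", "sat", "on", "mat", "a"]

def autoregressive_generate(n_tokens):
    """Left-to-right: each token is chosen after re-reading
    everything generated so far."""
    output = []
    for _ in range(n_tokens):
        # A real model scores VOCAB given `output`; here we just sample.
        output.append(random.choice(VOCAB))
    return output

def diffusion_generate(n_tokens, n_steps):
    """Start from pure noise over a fixed-length canvas and
    refine every position in parallel at each step."""
    output = [random.choice(VOCAB) for _ in range(n_tokens)]  # all noise
    for _ in range(n_steps):
        # A real model denoises all positions at once; here we
        # simply re-sample some positions on each refinement pass.
        for i in range(n_tokens):
            if random.random() < 0.5:
                output[i] = random.choice(VOCAB)
    return output

print(autoregressive_generate(5))
print(diffusion_generate(5, n_steps=3))
```

The practical difference is the loop structure: the autoregressive path must run once per token, while the diffusion path runs a fixed number of passes over the whole output, which is where the speed advantage comes from.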

OpenAI first successfully applied autoregressive generation to image models, and now Google has become the first major company to apply diffusion generation to text. The model begins with nonsense and refines the entire output with each iteration, producing thousands of tokens per second while maintaining accuracy. For context, Groq (not xAI’s Grok), one of the fastest inference providers in the world, generates near 275 tokens per second, and traditional providers like OpenAI or Anthropic cannot come close to those speeds.

The model, however, isn’t publicly available yet—interested users must join a waiting list—but early adopters have shared impressive results showing the model’s speed and precision.

Hands-on with Google’s AI tools

We got our hands on several of Google’s new AI features, with mixed results depending on the tier.

Deep Research is particularly powerful—even beating ChatGPT’s alternative. This comprehensive research agent evaluates hundreds of sources and delivers reliable information with minimal errors.

What gives it an edge over OpenAI’s research agent is the ability to generate infographics. After producing a complete research text, it can condense that information into visually appealing slides. We fed the model everything about Google’s latest announcement, and it presented accurate information through charts, schemes, graphs, and mind maps.

Veo 3 remains exclusive to Gemini Ultra users, though some third-party providers like Freepik and Fal.ai already offer access via API. Flow isn’t available to try unless you spring for the Ultra plan.

Flow proves to be an intuitive video editor with Veo’s models at its core, allowing users to edit, cut, extend, and modify AI scenes using simple text prompts.



However, even Veo 2 got a little love, making life easier for Pro users. Generations with the now-accessible Veo 2 are significantly faster—we created 8 seconds of video in about 30 seconds. While Veo 2 lacks sound and currently only supports text-to-video (with image-to-video coming soon), it understood our prompts and even generated coherent text.

Veo 2 already performs comparably to Kling 2.0—widely considered the quality benchmark in the generative video industry. The new generations with Veo 3 seem even more realistic and coherent, with good background sound and lifelike dialogue and voices.

For Imagen, it’s difficult to determine at first glance whether Google incorporates version 4 or still uses version 3 on its Gemini chatbot interface, though users can confirm this through Whisk. Our initial tests suggest Imagen 4 prioritizes realism unless specified otherwise, with better prompt adherence and visuals that surpass its predecessor.

We generated an image with different elements that don’t usually fit together in the same scene. Our prompt was “Photo of a woman with a skin made of glass, surrounded by thousands of glitter and ethereal pieces in a baroque room with the word ‘Decrypt’ written in neon, realistic.”

Even though both Imagen 3 and Imagen 4 understood the concept and the elements, Imagen 3 failed to capture the realistic style—which Imagen 4 easily did. Overall, Imagen 4 is comparable to the SOTA image generators, especially considering how easy it is to prompt.

Audio overviews have also improved, with models now easily providing over 20 minutes of full debates on Gemini instead of forcing users to switch to NotebookLM. This makes Gemini a more complete interface, reducing the fragmentation that previously required users to jump between different sites for various services.

The quality is comparable to that of NotebookLM, with slightly longer outputs on average. However, the key feature is not that the model is better, but that it is now embedded into Gemini’s chatbot UI.

Premium AI at a premium price

Google didn’t hide its monetization strategy. The company’s “Ultra” plan costs $250 monthly, bundling priority access to the most powerful models, Flow AI tools, and 30 terabytes of storage—clearly targeting filmmakers, serious creators, and businesses. The $20 “AI Pro” tier unlocks Google’s previous Veo 2 model, along with image and productivity features for a broader user base. Basic generative tools—like simple Gemini Live and image creation—remain free, but with limitations like a token cap and only 10 research queries per month.

This tiered approach mirrors the broader AI market trend: drive mass adoption with freebies, and then lock in the professionals with features too useful to pass up. Google’s bet is that the real action (and margin) is in high-end creative work and automated enterprise workflows—not just casual prompts and meme generation.

Edited by Andrew Hayward

Generally Intelligent Newsletter

A weekly AI journey narrated by Gen, a generative AI model.



Crypto Trends

Google Unveils Android XR Glasses with Gemini AI Integration

by admin May 21, 2025



In brief

  • At Google I/O, Google demos Android XR glasses with Gemini AI for translation, navigation, media, and real-time help.
  • The glasses are in beta testing, and Google plans to release them with eyewear brands Gentle Monster and Warby Parker.
  • Google is positioning Android XR to rival Meta’s AI glasses.

Google unveiled Android XR, a new extended reality platform designed to integrate its Gemini AI into wearable devices such as smart glasses and headsets.

During its 2025 I/O developer conference on Tuesday, the tech giant showcased the Android XR glasses, the company’s first eyewear set since the ill-fated Google Glass smart glasses, which were discontinued in 2023.

During the presentation, Shahram Izadi, Vice President and General Manager at Android XR, highlighted the need for portability and quick access to information without relying on a phone.

“When you’re on the go, you’ll want lightweight glasses that can give you timely information without reaching for your phone,” he said. “We built Android XR together as one team with Samsung and optimized it for Snapdragon with Qualcomm.”

Google first announced Android XR in December 2024. The reveal arrived eight months after Meta released the latest version of its Ray-Ban Meta AI glasses—a sign of growing competition in the wearable AI space.

Glasses with Android XR are lightweight and designed for all-day wear. They work with your phone so you can be hands-free, stay in the moment with friends and complete your to-do list. pic.twitter.com/CLXGxeQPzs

— Google (@Google) May 20, 2025

Like Meta’s AI glasses, the Android XR glasses include a camera, microphones, and speakers and can connect to an Android device.

Google’s flagship AI, Gemini, provides real-time information, language translation, and an optional in-lens display that shows information when needed.

During the presentation, Google also showed off the Android XR glasses live streaming capabilities, as well as their ability to take photos, receive text messages, and display Google Maps.

Google also demonstrated how Gemini can complement exploration and navigation through immersive experiences.

“With Google Maps in XR, you can teleport anywhere in the world simply by asking Gemini to take you there,” Izadi said. “You can talk with your AI assistant about anything you see and have it pull up videos and websites about what you’re exploring.”

While Google did not announce a release date or price, Izadi said the glasses would be available through partnerships with South Korean eyewear brand Gentle Monster and U.S. brand Warby Parker, adding that a developer platform for Android XR is in development.

“We’re creating the software and reference hardware platform to enable the ecosystem to build great glasses alongside us,” Izadi said. “Our glasses prototypes are already being used by trusted testers, and you’ll be able to start developing for glasses later this year.”

Edited by Sebastian Sinclair






GameFi Guides

Google Launches SynthID Detector to Catch Cheaters in the Act

by admin May 21, 2025



In brief

  • Google’s SynthID embeds traceable marks in all of Google’s AI tools.
  • The tool flags AI-generated image content using invisible watermarks across media.
  • It also helps identify AI-made text and video as concerns over cheating grow.

With deepfakes, misinformation, and AI-assisted cheating spreading online and in classrooms, Google DeepMind unveiled SynthID Detector on Tuesday. This new tool scans images, audio, video, and text for invisible watermarks embedded by Google’s growing suite of AI models.

Designed to work across multiple formats in one place, SynthID Detector aims to bring greater transparency by identifying AI-generated content created by Google’s AI, including the audio models NotebookLM and Lyria and the image generator Imagen, and highlighting the portions most likely to be watermarked.

“For text, SynthID looks at which words are going to be generated next, and changes the probability for suitable word choices that wouldn’t affect the overall text quality and utility,” Google said in a demo presentation.

“If a passage contains more instances of preferred word choices, SynthID will detect that it’s watermarked,” it added.

SynthID adjusts the probability scores of word choices during text generation, embedding an invisible watermark that doesn’t affect the meaning or readability of the output. This watermark can later be used to identify content produced by Google’s Gemini app or web tools.
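In spirit, the mechanism Google describes can be mocked up in a few lines: bias generation toward a keyed “preferred” subset of words, then detect by counting how often word transitions land in that subset. Everything below (the hash-based preference function, the tiny vocabulary, the scoring logic) is invented for illustration and is not SynthID’s actual algorithm.

```python
import hashlib

VOCAB = ["quick", "fast", "rapid", "swift", "speedy", "brisk"]

def is_preferred(prev_word, candidate):
    # Deterministically mark roughly half of all (prev, candidate)
    # pairs as "preferred" -- a stand-in for a secret watermark key.
    digest = hashlib.sha256(f"{prev_word}|{candidate}".encode()).digest()
    return digest[0] % 2 == 0

def watermark_choose(prev_word, ranked_candidates):
    # Pick the highest-ranked candidate that is preferred; fall back
    # to the top choice, so text quality is never badly degraded.
    for cand in ranked_candidates:
        if is_preferred(prev_word, cand):
            return cand
    return ranked_candidates[0]

def watermark_score(words):
    # Detection: the fraction of transitions that hit the preferred
    # set. Unwatermarked text hovers near 0.5; watermarked text is
    # pushed well above it.
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    return sum(is_preferred(a, b) for a, b in pairs) / len(pairs)

# Generate some watermarked text and score it.
text = ["watermark"]
for _ in range(20):
    text.append(watermark_choose(text[-1], VOCAB))
print(round(watermark_score(text), 2))
```

The key property this toy shares with the real scheme is that the bias is statistically detectable over a passage while each individual word choice still looks natural.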

Google first introduced SynthID watermarking in August 2023 as a tool to detect AI-generated images. With the launch of SynthID Detector, Google expanded this functionality to include audio, video, and text.

Currently, SynthID Detector is available in limited release and has a waitlist for journalists, educators, designers, and researchers to try out the program.

As generative AI tools become more widespread, educators are finding it increasingly difficult to determine whether a student’s work is original, even in assignments meant to reflect personal experiences.

Using AI to cheat

A recent report by New York Magazine highlighted this growing problem.

A technology ethics professor at Santa Clara University assigned a personal reflection essay, only to find that one student had used ChatGPT to complete it.

At the University of Arkansas at Little Rock, another professor discovered students relying on AI to write their course introduction essays and class goals.

Despite an increase in students using its AI model to cheat in class, OpenAI shut down its AI detection software in 2023, citing a low rate of accuracy.

“We recognize that identifying AI-written text has been an important point of discussion among educators, and equally important is recognizing the limits and impacts of AI-generated text classifiers in the classroom,” OpenAI said at the time.

Compounding the issue of AI cheating are new tools like Cluely, an application designed to bypass AI detection software. Developed by former Columbia University student Roy Lee, Cluely circumvents AI detection on the desktop level.

Promoted as a way to cheat on exams and interviews, Lee raised $5.3 million to build out the application.

“It blew up after I posted a video of myself using it during an Amazon interview,” Lee previously told Decrypt. “While using it, I realized the user experience was really interesting—no one had explored this idea of a translucent screen overlay that sees your screen, hears your audio, and acts like a player two for your computer.”

Despite the promise of tools like SynthID, many current AI detection methods remain unreliable.

In October, a test of the leading AI detectors by Decrypt found that only two of the four leading AI detectors (Grammarly, Quillbot, GPTZero, and ZeroGPT) could determine whether humans or AI wrote the U.S. Declaration of Independence.

Edited by Sebastian Sinclair




Product Reviews

Everything Google Announced at I/O 2025

by admin May 20, 2025


Some of these Gemini features will be coming to users of Android and Google’s web apps for free, but others (and the more powerful feature sets) will be available via paid subscription. Google’s $20 a month AI Premium service has been renamed to Google AI Pro, and the cost stays the same, though it now comes with more limited features. Google AI Ultra, the company’s full suite of AI services, has increased to $250 per month. That’s $50 per month more expensive than OpenAI’s similar full-suite plan, ChatGPT Pro.

Gemini Is an Artist, Actually

Creative professionals and programmers, take note: Google’s enhancements to its creative tools will either make your job easier and more productive, or render you obsolete.

Jules is an “asynchronous coding agent” that aims to let you take a rough design scribbled on a napkin and turn it into a full-fledged illustration or code, while showing you the work it did along the way.

There’s also a new version of Google’s AI image generator called Imagen 4 that Google claims can generate much more detail in images, like textures in AI-generated paintings or custom text in music posters.

Courtesy of Google


Google also has some new generative AI video tools, like Flow, a tool made specifically for AI movie creation. It lets you upload photos or illustrations of characters, props, or scenery, then animate it all into a short movie using text prompts. If you don’t have photos, you can just type a generative prompt to make the visuals right inside Flow. Then you build a narrative video scene by scene by describing the action in a text box. The company illustrated Flow by showing a generated video of an old man using a giant chicken in the backseat to make his car fly. The video didn’t look that great, coming across as weirdly plastic, but it got the point across.

Also included in the update is an enhanced video generator called Veo 3 that Google says has a better understanding of material physics for smoother and more lifelike animations.

Search Goes Full AI Mode

Last year at I/O, Google unleashed its AI Overviews enhancement to search results, a feature that summarizes results from across the web at the top of the screen for some queries. The results were famously varied, from being just plain busted to having hilarious hallucinations to showing actual plagiarism. Nevertheless, Google is now giving its search experience an even shinier AI sheen.

To that end, Google is making search much more chatbot-oriented with its new AI Mode. This search feature was first announced in March 2025 as an experiment, and now it’s available within the default Google search experience for everyone in the US. AI Mode appears in a tab within your search results, so you can switch over to it with a click if it’s available.



Product Reviews

Google IO 2025 live keynote: all the latest on Gemini AI, Android 16 and more

by admin May 20, 2025



Welcome to our Google IO 2025 live blog, where we’re bringing you all the latest from the search giant’s opening keynote at the Shoreline Amphitheater in Mountain View, California.

Google is expected to speak about a whole host of products and services. Gemini AI is likely to be a major focus, with appearances from Android 16, WearOS 6 and Android XR all tipped to happen.

Google IO 2025 keynote live blog

Last updated May 20, 2025, 10:32 AM. The liveblog has ended.






Product Reviews

How to watch the Google IO 2025 keynote

by admin May 20, 2025



Google IO 2025 takes place today, and you’ll be able to watch the keynote livestream right here, via the video player above.

While IO is predominantly a developer conference, the opening keynote of the two-day event always delivers exciting news on new products and services Google has been working on.

We’re expecting a bumper crop of announcements from the search giant, including more information on Android 16 – although Google revealed a lot last week in its Android Show – updates to its Gemini AI, and possibly new VR headsets / AR glasses alongside its dedicated Android XR platform.

We might also get updates on Android Auto and WearOS 6, but we wouldn’t be surprised if Google focuses heavily on AI.

When is the Google IO 2025 keynote?

The Google IO 2025 keynote starts at 10am PT / 1pm ET on May 20, 2025.

It’s being held in-person at Shoreline Amphitheater in Mountain View, California.

How can I watch the Google IO 2025 keynote?

You can watch the opening keynote via the video player at the top of this article, where Google says you’ll “learn the latest news, announcements, and AI updates.”

The keynote is also being livestreamed on YouTube and on the Google IO website.

The opening keynote is followed by Google’s developer keynote, which kicks off at 1.30pm PT / 4.30pm ET; you’ll also be able to watch that via the IO website.

How long is the Google IO 2025 keynote?

While Google hasn’t confirmed how long this year’s keynote will be, history suggests we’ll be in for a lengthy session.

The 2024 keynote ran for one hour, 52 minutes, while the 2023 keynote ran for two hours and five minutes. Basically, make sure you’ve got enough snacks to last two hours.

While you wait for the Google IO 2025 keynote to start, why not familiarise yourself with everything announced last year at Google IO 2024?






Gaming Gear

Google releases its NotebookLM mobile app

by admin May 20, 2025


The app appears to offer similar functionality to the desktop version of NotebookLM, including the ability to upload sources of information that the app can summarize. It can also make AI-generated, podcast-like Audio Overviews. With the app, you can listen to those Audio Overviews in the background while you’re doing other things on your phone or while offline, Google says. Might be a handy way to get caught up on your performance review.

The company teased the launch of the mobile app for the AI-powered tool last month. Its official arrival is happening just ahead of Google I/O, which kicks off with the opening keynote at 1PM ET on Tuesday. It’s probably going to be an AI show.

Update, May 19th: The iOS app is now available, too.



Gaming Gear

How to manage your bookmarks in Google Chrome

by admin May 19, 2025


There’s a lot to look at, watch, and listen to on the web. Fully utilizing the bookmarks feature in Google Chrome can be a real help in staying on top of everything.

Chrome is the browser I use most often, and I’ve got a huge number of bookmarked sites inside it: long reads I want to get back to once work is done, news updates to write up for work, gift ideas, apps I’d like to check out, important Slack channels, and content systems for my job… the list goes on. All synced between devices and available everywhere.

If you haven’t done a deep dive into Chrome’s bookmarks feature then you might not be aware of everything you can do with it, how it can save you time, and how you can bring some kind of order to your web browsing.

You can edit bookmarks as soon as you’ve saved them. Screenshot: Google

The star icon to the right of the address bar in Chrome on the desktop is for saving new bookmarks. Click it and the current page gets saved to the most recently used bookmarks folder. You can also press Ctrl+D (Windows) or Cmd+D (macOS), which is even easier. On mobile, tap the three dots at the top then the star icon (Android), or the three dots at the bottom then Add to bookmarks (iOS) to save the current page as a new bookmark.

As soon as a bookmark is saved, a small dialog pops up. You can use it to change the bookmark name and folder if you need to, or just click Done to move on.

More options can be found on the desktop by clicking the three dots (top right) then Bookmarks and lists. You’re able to add bookmarks from this menu, too, and there’s also a Bookmark all tabs option, which saves every open tab (which, like tab groups, is handy if you need to do something else but want to be able to get back to where you are).

On the same Bookmarks and lists menu you can toggle the bookmarks bar on and off. The bookmarks bar sits just below the address bar, and gives you another way to add bookmarks: drag the URL of the site you’re viewing down from the address bar to the bookmarks bar.

You can find your saved links in the bookmark manager. Screenshot: Google

Bookmarks become a lot more useful when they’re well organized. On the desktop you can launch the full bookmarks manager by clicking Bookmarks on the bookmarks bar, or by clicking the three dots (top right) and then Bookmarks and Lists > Bookmark Manager. You can search through your bookmarks here, and create new folders: Click the three dots (top right), then Add new folder — and yes, you can put folders inside other folders. (This is also where you’ll find the Export feature, and it’s a good idea to export your bookmarks occasionally as a backup.)
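As an aside, the file that Export produces is plain HTML in the old Netscape bookmark format, so it’s easy to process outside the browser. Here’s a minimal sketch using Python’s standard library; the sample markup and the `bookmarks.html` file name are illustrative, so point it at whatever name you gave your own exported file.

```python
from html.parser import HTMLParser

class BookmarkLister(HTMLParser):
    """Collect (title, url) pairs from a Chrome bookmarks export.
    In the Netscape bookmark format, each link is an <A HREF=...> tag."""
    def __init__(self):
        super().__init__()
        self._href = None
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":  # HTMLParser lowercases tag names
            self._href = dict(attrs).get("href")

    def handle_data(self, data):
        if self._href:  # the text inside an <A> tag is the bookmark title
            self.links.append((data.strip(), self._href))
            self._href = None

parser = BookmarkLister()
# In practice, feed your exported file:
#   parser.feed(open("bookmarks.html", encoding="utf-8").read())
sample = '<DL><DT><A HREF="https://www.theverge.com/">The Verge</A></DL>'
parser.feed(sample)
print(parser.links)  # [('The Verge', 'https://www.theverge.com/')]
```

From there it’s a short step to de-duplicating links or checking for dead URLs before re-importing.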

Folders are a key part of keeping your bookmarks organized, and they’re all accessible on the left of the bookmarks manager page. Click and drag bookmarks to change their order, or to move them into different folders. Right-click on a folder to rename or delete it, or drag it on top of another folder to put it inside that folder.

You can double-click any bookmark to open it. To edit it, click the three dots to the right: You’re able to rename your bookmarks and change their URLs, as well as put them in different folders. If you right-click on a bookmarks folder, you get the option to open all the links it contains in separate tabs — which may not do your computer much good if there are dozens or hundreds of bookmarks in the folder.

The bookmark manager can be accessed on mobile too, though you don’t get quite as many options to play around with. Tap the three dots at the top (Android) or bottom (iOS) of the interface, then choose Bookmarks to see the list. As long as you’re signed in with your Google account, bookmarks will sync between your devices.

Doing more with bookmarks

With some tweaks, you can get your bookmarks bar looking much more compact. Screenshot: Google

There are other ways to work with your bookmarks in Chrome.

  • Here’s a clever trick for your desktop bookmarks bar: right-click on each bookmark in turn, remove the text in the Name field completely each time, and click Save. You’ll then have a compact row of bookmark icons showing only the favicons for each site (such as a purple V for The Verge). It’s a neat way to put more bookmarks within easy reach, and you can still tell what each one is.
  • Something else you might want to try is to search through your bookmarks directly from the address bar. You may see some bookmark results suggested underneath as you type out your search query, and you can also force a bookmarks search by preceding your query with “@bookmarks” and a space.
  • If you want to keep bookmarks visible at all times, that’s possible too. Click the three dots in Chrome for the desktop, then choose Bookmarks and lists > Show all bookmarks. A side panel appears displaying all your bookmarks: You can search through them, create new folders, and edit your bookmarks from here (click the three dots to the side).
  • The icons in the top right corner, from left to right, let you change the order of the bookmark lists, switch between compact and visual views, and edit multiple bookmarks at once. You can shut down the panel with the X button, but if you click the pin button just next to it before you do, you get a dedicated bookmarks panel icon that stays permanently available on the Chrome toolbar, and will let you open it back up again.

This has hopefully given you some idea of what you can do with bookmarks in Google Chrome. And there’s more, so it might be worth taking some time to look around. It’s not the most sophisticated of systems, but it gives you plenty of flexibility in how you use it, so you can set up your folders and shortcuts in the ways that work best for you.





May 19, 2025