The codeless revolution promised to save us all, but the dream didn't quite pan out. AI is finally closing the gap that pure codeless tools never could. It's about giving real humans superpowers so we can stop wasting our lives on repetitive grunt work.

From Codeless to AI-Powered: The Next Evolution of Test Automation

2025/11/28 12:44

Remember when “codeless” test automation first showed up? The pitch was irresistible: no coding, just record your actions, hit play, and boom, your tests run themselves forever. Testing for everyone, finally!

Fast-forward to today, and… yeah, most of us are still buried in script maintenance, chasing flaky locators that break every sprint, and praying the tests pass in CI so we can ship on time. The dream didn’t quite pan out.

But something actually different is happening right now in 2025, and it’s not just another marketing buzzword refresh. AI is finally closing the gap that pure codeless tools never could.

And the best part? It’s about giving real humans superpowers so we can stop wasting our lives on repetitive grunt work and get back to the stuff that actually matters, thinking critically, exploring edge cases creatively, and fighting for quality.

Let’s take a quick, honest look at how we got here.

Where It All Started: The Manual Testing Nightmare

Before automation was a thing, testing was the ultimate bottleneck. Devs could crank out features like crazy, but QA was stuck in the stone age: open a massive spreadsheet, click through the same flows over and over, document everything by hand… sprint after sprint.

It sucked for pretty obvious reasons:

  • It wasted time. A decent regression run could take days, or weeks if you were thorough. Releases turned into all-night death marches.
  • Humans make mistakes. No matter how careful you were, you’d miss a step, fat-finger something, or test with yesterday’s data.
  • Scaling nightmare. Want to cover Chrome, Firefox, Edge, Safari, plus iOS and Android? Cool, just multiply your effort by 50–150x.
  • Technical debt piled up fast. Apps got more complicated, test cases got out of date, and eventually the only person who knew how anything worked was “that one senior tester who’s on vacation this week.”

We all looked at that mess and said, “There has to be a better way.” So the industry charged head-first into automation… and that’s when the codeless revolution promised to save us all.

The Codeless Revolution: Great Promise, Mixed Results

The Vision

Around 2015-2018, codeless test automation emerged as the democratizing force testing teams desperately needed. The pitch was compelling: empower manual testers to create automation without learning to code. Record your actions, and the tool generates the test. No programming degree required.

Tools like Katalon Studio, Ranorex, and TestComplete gained rapid adoption by offering:

  • Visual test builders with drag-and-drop interfaces
  • Record-and-playback functionality that captured user actions
  • Keyword-driven testing that abstracted technical complexity
  • Lower barriers to entry for non-technical team members

Early success stories were encouraging. Teams that had never attempted automation were suddenly building test suites, and test creation accelerated dramatically: industry practitioners reported that tests requiring 45-60 minutes of hand-coding could often be recorded in under 5 minutes.

The Reality Check

But as codeless adoption scaled, limitations became impossible to ignore.

The brittleness problem emerged first. Tests that worked perfectly on Monday would mysteriously fail on Tuesday, not because the application broke, but because a developer changed a button's CSS class or moved an element 10 pixels. Industry research suggests teams commonly spend significant portions of their automation effort on test maintenance rather than creating new tests.

Dynamic applications exposed gaps. Modern web applications with single-page architectures, asynchronous loading, and dynamic content generation broke the simple record-playback model. Tests would fail because elements weren't ready, or succeed for the wrong reasons when timing accidentally aligned.

Complexity hit walls. Try implementing conditional logic, complex data validation, or sophisticated test orchestration in a purely codeless environment. You'd quickly find yourself either adding code anyway or building workarounds so convoluted they defeated the original purpose.

False positives eroded trust. The most insidious problem wasn't test failures; it was tests that passed when they shouldn't. A test that doesn't actually validate functionality is worse than no test at all, creating false confidence that leads to production bugs.

A 2024 PractiTest survey revealed that while 30% of teams had automated about 50% of their testing effort, only 2% had completely replaced manual testing. The gap between aspiration and reality remained stubbornly wide.

The Persistent Value

Despite these challenges, codeless testing proved its worth in specific contexts. It successfully:

  • Lowered the technical barrier for QA teams to begin automation
  • Accelerated initial test suite development
  • Enabled faster feedback loops than manual testing alone
  • Created reusable test components and libraries

The problem wasn't that codeless testing failed, it was that it couldn't go far enough. It solved the creation problem but struggled with maintenance, adaptability, and intelligence. The industry needed something more.

Enter AI: The Missing Intelligence Layer

This is where 2025 becomes genuinely different from any previous automation era. Artificial intelligence isn't just another feature checkbox, it's a fundamental reimagining of how test automation works.

What Makes AI-Powered Testing Different

Self-healing represents a paradigm shift. Instead of breaking when a developer changes id="submit-button" to id="submit-btn", AI-powered tests understand context. They analyze multiple attributes, visual appearance, position, surrounding text, function, semantic meaning, and automatically adapt to changes. Machine learning algorithms learn from successful test runs and predict the most reliable element identifiers.

The result? According to Gartner's research on AI in software testing, AI-driven automation and self-healing test scripts are becoming standard across the industry, with predictions that by 2025-2027, over 80% of test automation frameworks will incorporate these capabilities.

Intelligent test generation goes beyond recording. Modern AI doesn't just capture what you clicked, it understands what you're trying to test. Tools like Katalon's StudioAssist can take natural language descriptions like "verify a user can complete checkout with a discount code" and generate comprehensive test cases that cover happy paths, error conditions, and edge cases.

Even more powerful, AI can analyze your application's behavior patterns, user flows, and code changes to automatically suggest new test cases you haven't even thought of yet.

Smart maintenance becomes proactive, not reactive. AI-powered test platforms analyze failure patterns across thousands of test runs. They distinguish between real application bugs, environmental issues, and test script problems. They identify flaky tests before they erode team confidence and suggest optimizations to improve suite reliability.
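One concrete signal behind flaky-test detection is outcome instability: a test whose result keeps flipping between runs without any code change. The sketch below uses a simple flip-rate heuristic over pass/fail history; the metric and threshold are illustrative assumptions, not any vendor's actual algorithm.

```python
# Sketch: flag flaky tests from pass/fail history using a flip-rate
# heuristic. The metric and threshold are illustrative assumptions,
# not any specific platform's algorithm.

def flip_rate(history):
    """Fraction of consecutive runs where the outcome changed (1=pass, 0=fail)."""
    if len(history) < 2:
        return 0.0
    flips = sum(1 for a, b in zip(history, history[1:]) if a != b)
    return flips / (len(history) - 1)

def find_flaky(histories, threshold=0.3):
    """Return test names whose outcomes flip more often than the threshold."""
    return sorted(
        name for name, runs in histories.items()
        if flip_rate(runs) >= threshold
    )

histories = {
    "test_login":    [1, 1, 1, 1, 1, 1],  # stable pass
    "test_checkout": [1, 0, 1, 0, 1, 1],  # flips constantly: flaky
    "test_search":   [1, 1, 1, 0, 0, 0],  # one real regression, not flaky
}
print(find_flaky(histories))  # ['test_checkout']
```

Note the distinction the flip rate captures: a genuine regression fails consistently after one transition, while a flaky test alternates, which is exactly why `test_search` is not flagged.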

When a test fails, AI provides intelligent root cause analysis, showing exactly what changed, which commit likely caused it, and which similar tests might be affected.

Natural language processing democratizes advanced testing. Forget learning XPath, CSS selectors, or programming syntax. Modern AI testing platforms let you write tests in plain English: "Click the checkout button," "Verify the total equals $99.99," "Fill in the email field with test@example.com." The AI handles all the technical translation.
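To make that translation step concrete, here is a deliberately tiny rule-based version of it: mapping plain-English steps to structured actions. Real platforms use language models rather than regexes; this sketch only illustrates the shape of the mapping, and the step phrasings are the examples from the paragraph above.

```python
import re

# Sketch: translate plain-English steps into structured actions, the kind
# of mapping an NLP layer performs before driving a browser. Real platforms
# use language models; this rule-based version is only illustrative.

PATTERNS = [
    (r'^click the (.+?) button$',            lambda m: ("click", m.group(1))),
    (r'^verify the (.+?) equals (.+)$',      lambda m: ("assert_equals", m.group(1), m.group(2))),
    (r'^fill in the (.+?) field with (.+)$', lambda m: ("type", m.group(1), m.group(2))),
]

def parse_step(step):
    """Map one plain-English step to an action tuple, or raise if unrecognized."""
    text = step.strip().lower().rstrip('.')
    for pattern, build in PATTERNS:
        m = re.match(pattern, text)
        if m:
            return build(m)
    raise ValueError(f"unrecognized step: {step!r}")

print(parse_step("Click the checkout button"))
# ('click', 'checkout')
print(parse_step("Verify the total equals $99.99"))
# ('assert_equals', 'total', '$99.99')
```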

The Technology Stack Behind the Intelligence

This isn't magic, it's sophisticated application of proven AI technologies:

Machine learning algorithms analyze historical test execution data to predict which tests are most likely to catch bugs, optimize test selection for CI/CD pipelines, and identify redundant test coverage.
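A minimal version of that test-selection idea can be sketched without any ML library: rank tests by how much their coverage overlaps the changed files, weighted by historical failure rate. The scoring formula here is an illustrative assumption standing in for a trained model.

```python
# Sketch: risk-based test selection for a CI pipeline. Tests touching
# recently changed files, weighted by historical failure rate, run first.
# The scoring formula is an illustrative assumption, not a trained model.

def score(test, changed_files):
    """Higher score = more likely to catch a bug in this change set."""
    overlap = len(test["covers"] & changed_files)
    return overlap * (1 + test["failure_rate"])

def select_tests(tests, changed_files, budget=2):
    """Pick the `budget` highest-risk tests for this change set."""
    ranked = sorted(tests, key=lambda t: score(t, changed_files), reverse=True)
    return [t["name"] for t in ranked[:budget]]

tests = [
    {"name": "test_cart",   "covers": {"cart.py", "pricing.py"}, "failure_rate": 0.20},
    {"name": "test_login",  "covers": {"auth.py"},               "failure_rate": 0.05},
    {"name": "test_search", "covers": {"search.py"},             "failure_rate": 0.10},
]
changed = {"pricing.py", "auth.py"}
print(select_tests(tests, changed))  # ['test_cart', 'test_login']
```

In a real pipeline the coverage map would come from an instrumentation tool and the failure rates from run history; the point is only that selection becomes a ranking problem.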

Computer vision enables visual testing that understands layouts, designs, and user interfaces the way humans do, catching visual regressions that code-based assertions would miss entirely.

Natural language processing bridges the gap between business requirements and technical test implementation, parsing user stories and requirements documents to generate test scenarios automatically.

Predictive analytics forecast where bugs are most likely to occur based on code complexity, change frequency, and historical defect patterns, directing testing effort where it matters most.

Evolution in Action: Capability Comparison

Let's get concrete about what's actually different across the three generations of testing:

| Dimension | Manual Testing | Codeless Automation | AI-Powered Automation |
|----|----|----|----|
| Test Creation Speed | Slowest (hours per test) | Fast (minutes per test) | Fastest + intelligent (seconds + auto-generation) |
| Initial Learning Curve | Low | Low-Medium | Minimal (natural language) |
| Maintenance Burden | N/A (recreate each time) | Medium-High | Low (self-healing) |
| Handling UI Changes | Manual rework | Manual test updates | Automatic adaptation |
| Complex Scenario Support | Limited by tester time | Limited by tool flexibility | Advanced (AI understands context) |
| Flaky Test Management | N/A | Manual investigation | Automatic detection & correction |
| Coverage Optimization | Manual prioritization | Manual test selection | AI-driven risk-based selection |
| Root Cause Analysis | Manual debugging | Log review | Intelligent pattern analysis |
| Test Data Management | Manual creation | Some generation | Smart synthetic data creation |
| Cross-browser Consistency | High manual effort | Automated but brittle | Intelligent element handling |

The key insight: AI doesn't just make things faster, it makes them smarter. That's the fundamental difference.

Real-World Impact: Where AI Delivers Tangible Value

Theory is interesting. Results are what matter. Here's where AI-powered testing is delivering measurable impact today:

Self-Healing Tests: Maintenance That (Mostly) Handles Itself

Consider a typical scenario: Your development team implements a design refresh, changing class names, restructuring the DOM, and updating CSS. In traditional automation, this triggers a cascade of test failures, not because functionality broke, but because locators broke.

With AI-powered self-healing:

  1. The test runs and encounters a changed element
  2. AI analyzes multiple attributes (text content, position, function, visual appearance)
  3. System automatically identifies the correct element using alternative locators
  4. Test continues executing successfully
  5. Platform logs the change and suggests updating the stored locator
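The five steps above can be sketched as a locator-fallback routine: try the stored id, and when it fails, score every element on how many other remembered attributes still match. The DOM model, similarity metric, and confidence threshold here are all illustrative assumptions, not any platform's real implementation.

```python
# Sketch of the self-healing flow: try the stored id first; on failure,
# score elements by remembered attributes and heal to the best candidate.
# Attributes, weights, and threshold are illustrative assumptions.

def find_element(dom, stored):
    # Steps 1-2: try the stored id, then fall back to attribute scoring.
    for el in dom:
        if el.get("id") == stored["id"]:
            return el, False  # exact match, no healing needed
    def similarity(el):
        keys = ("text", "role", "near_text")
        return sum(el.get(k) == stored.get(k) for k in keys) / len(keys)
    best = max(dom, key=similarity)
    if similarity(best) >= 0.67:  # Step 3: confident alternative found
        return best, True         # Steps 4-5: continue, flag locator update
    raise LookupError("no reliable match; report a real failure")

dom = [
    {"id": "search-box", "text": "",       "role": "input",  "near_text": "Search"},
    {"id": "submit-btn", "text": "Submit", "role": "button", "near_text": "Checkout"},
]
# The developer renamed "submit-button" to "submit-btn"; the stored locator is stale.
stored = {"id": "submit-button", "text": "Submit", "role": "button", "near_text": "Checkout"}
el, healed = find_element(dom, stored)
print(el["id"], healed)  # submit-btn True
```

The `healed` flag is what drives step 5: the run succeeds, but the platform still surfaces the change so the stored locator can be updated.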

Organizations implementing AI-powered self-healing capabilities report significant reductions in maintenance overhead. One Katalon enterprise customer documented a 50% reduction in regression testing timeline while simultaneously increasing test coverage by 60%.

Intelligent Test Generation: Coverage You Didn't Know You Needed

AI doesn't just execute tests, it thinks about testing strategy. Modern platforms analyze:

  • User behavior patterns from production analytics to identify critical user journeys
  • Code complexity metrics to determine high-risk areas needing additional coverage
  • Historical defect data to understand where bugs typically hide
  • Application changes to automatically generate tests for new or modified features

Root Cause Analysis: From Hours to Minutes

When tests fail at 2 AM in your CI/CD pipeline, every minute counts. Traditional approaches meant:

  1. Reviewing logs across multiple systems
  2. Attempting to reproduce locally
  3. Analyzing screenshots and error messages
  4. Investigating recent code changes
  5. Determining if it's a real bug or test issue

AI-powered platforms compress this process through:

  1. Automatic failure pattern recognition
  2. Correlation with recent deployments and code changes
  3. Visual diff analysis showing exactly what changed
  4. Historical failure pattern comparison
  5. Probable root cause identification with confidence scores
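Step 2 of that list, correlating a failure with recent changes, can be illustrated with a single heuristic: commits that landed shortly before the first failure and touched files the test covers. Real platforms combine many such signals; this one-signal sketch is only illustrative.

```python
from datetime import datetime, timedelta

# Sketch: correlate a test's first failure with recent commits touching
# the files it covers. The 24-hour window and file-overlap rule are
# illustrative assumptions, not a real platform's full analysis.

def suspect_commits(first_failure, commits, covered_files, window_hours=24):
    """Return shas of commits in the window that touch covered files, newest first."""
    window = timedelta(hours=window_hours)
    candidates = [
        c for c in commits
        if first_failure - window <= c["time"] <= first_failure
        and c["files"] & covered_files
    ]
    return [c["sha"] for c in sorted(candidates, key=lambda c: c["time"], reverse=True)]

commits = [
    {"sha": "a1b2", "time": datetime(2025, 11, 27, 22, 0), "files": {"checkout.py"}},
    {"sha": "c3d4", "time": datetime(2025, 11, 27, 9, 0),  "files": {"styles.css"}},
    {"sha": "e5f6", "time": datetime(2025, 11, 20, 9, 0),  "files": {"checkout.py"}},
]
failure_time = datetime(2025, 11, 28, 2, 0)  # the 2 AM CI failure
print(suspect_commits(failure_time, commits, {"checkout.py", "cart.py"}))
# ['a1b2']
```

Attaching a confidence score to each suspect (step 5) would then be a matter of weighting recency and overlap rather than filtering on them.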

Development teams leveraging AI-assisted debugging capabilities report substantially faster issue resolution times compared to traditional manual investigation approaches.

Test Optimization: Doing More with Less

Most test suites accumulate cruft over time, redundant tests, low-value tests, and tests that no longer align with product priorities. AI brings data-driven optimization:

  • Redundancy detection identifies tests covering identical functionality
  • Risk-based prioritization runs high-value tests first in CI/CD pipelines
  • Parallel execution optimization intelligently distributes tests across resources
  • Maintenance cost analysis flags tests requiring disproportionate maintenance effort
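One simple form of the redundancy detection listed above: flag any test whose coverage is a strict subset of another single test's. The coverage sets are assumed given (for example, from a coverage tool); real platforms use richer signals than raw set containment.

```python
# Sketch: flag tests fully shadowed by some other single test, one simple
# form of redundancy detection. Coverage sets are assumed given; real
# platforms use richer signals than set containment.

def redundant_tests(coverage):
    """Return tests whose covered features are a strict subset of another test's."""
    redundant = set()
    for name, covered in coverage.items():
        for other, other_covered in coverage.items():
            if other != name and covered < other_covered:  # strict subset
                redundant.add(name)
                break
    return sorted(redundant)

coverage = {
    "test_full_checkout": {"cart", "payment", "receipt"},
    "test_cart_only":     {"cart"},
    "test_login":         {"auth"},
}
print(redundant_tests(coverage))  # ['test_cart_only']
```

Strict subset (`<`) matters here: two tests with identical coverage are duplicates of each other, and deciding which one to keep needs a tie-breaker this sketch deliberately leaves out.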

Organizations implementing AI-driven test suite optimization commonly report dramatic reductions in regression suite execution time while maintaining comprehensive coverage of critical application paths.

The Hybrid Approach: Combining Human Intelligence with AI Power

Here's a crucial insight that gets lost in vendor marketing: AI-powered testing isn't about replacing codeless or scripted approaches, it's about enhancing them.

The most successful teams in 2025 use a spectrum of automation strategies based on context:

Pure no-code for straightforward regression tests on stable application areas. Quick to create, easy to understand, perfect for QA team members who want to contribute without coding.

Low-code with AI assistance for the majority of test scenarios. Natural language combined with visual building, backed by AI-powered maintenance and optimization. This is the sweet spot for most modern testing.

Full-code with AI augmentation for complex test scenarios, custom integrations, and sophisticated test infrastructure. AI assists with code generation, review, and maintenance suggestions, but developers retain full control.

AI-generated tests for exploratory coverage, edge case identification, and areas where AI can identify gaps humans might miss.

The platform that enables this flexibility, moving seamlessly between approaches based on need, wins. Katalon's hybrid model lets teams choose their approach per test case, per team member, per project phase.

Getting Started: Your AI Testing Roadmap

Ready to move beyond pure codeless into AI-augmented testing? Here's your practical implementation guide:

Phase 1: Assessment (Week 1-2)

Evaluate your current state:

  • What percentage of testing is automated today?
  • How much time goes to test maintenance vs. new test creation?
  • Where do tests break most frequently?
  • Which test suites cause the most frustration?

Identify AI-ready opportunities:

  • High-maintenance test suites (prime candidates for self-healing)
  • Areas with poor test coverage (where AI generation adds value)
  • Flaky tests eroding team confidence (AI can stabilize these)
  • Time-consuming test creation processes (AI accelerates these)

Check team readiness:

  • Current tool proficiency
  • Openness to new approaches
  • Time available for learning and transition
  • Executive support for experimentation

Phase 2: Pilot Implementation (Week 3-8)

Start small and strategic:

Choose 1-2 high-impact test suites for initial AI augmentation. Ideal candidates:

  • Medium complexity (not trivial, not overwhelmingly complex)
  • High maintenance burden (you'll see ROI quickly)
  • Good test data availability
  • Engaged product owner who cares about results

Implement incrementally:

Week 3-4: Enable AI-powered self-healing on existing tests. Katalon Studio's smart locator capabilities work on tests you've already built.

Week 5-6: Use AI-assisted test generation for new features. Try StudioAssist's natural language capabilities for new test case creation.

Week 7-8: Analyze results, measure impact, refine approach. Document time savings, failure reduction, coverage improvements.

Phase 3: Scale and Optimize (Week 9-16)

Expand successful patterns:

  • Roll out to additional test suites based on pilot learnings
  • Train broader team on AI-augmented workflows
  • Establish best practices and guidelines
  • Integrate with CI/CD pipelines

Measure and communicate:

  • Test maintenance time reduction
  • False failure rate improvement
  • New test creation velocity
  • Defect detection improvement
  • Team satisfaction and confidence

Optimize continuously:

  • Refine AI models with your application's patterns
  • Update test generation templates based on your domain
  • Adjust self-healing confidence thresholds
  • Expand coverage systematically

Common Pitfalls to Avoid

Over-trusting AI without verification. AI is powerful but not infallible. Review AI-generated tests, validate self-healing decisions, and maintain human oversight of critical test scenarios.

Neglecting test data quality. AI is only as good as the data it learns from. Invest in quality test data, realistic test environments, and proper data management.

Skipping team training. AI tools still require understanding. Teams need to learn how to work effectively with AI assistance, interpret AI insights, and override AI decisions when appropriate.

Expecting instant perfection. AI improves over time as it learns your application's patterns. Early results will be good; results after 3-6 months will be excellent.

Vendor lock-in concerns. Choose platforms with open standards, API access, and data export capabilities. Katalon supports integration with industry-standard tools and frameworks.

Making the Shift: Key Takeaways

As we trace the evolution from manual testing through codeless automation to today's AI-powered platforms, several truths emerge:

Each evolution solved real problems, and created new ones. Manual testing was thorough but slow. Codeless automation was fast but brittle. AI-powered testing is intelligent but requires thoughtful implementation.

The goal isn't replacing humans, it's elevating them. The best testing teams in 2025 aren't the ones with the most AI; they're the ones using AI to free skilled testers from repetitive work so they can focus on exploratory testing, risk analysis, test strategy, and quality advocacy.

Hybrid approaches win. Pure no-code, pure AI, and pure scripting all have their place. Platforms that enable seamless movement between approaches based on context deliver the best results.

Implementation matters as much as technology. The fanciest AI features won't help if your team doesn't understand them, trust them, or use them. Successful adoption requires training, piloting, measuring, and iterating.

Start now, but start smart. The gap between teams leveraging AI-powered testing and those stuck in pure codeless or manual approaches is widening rapidly. But rushing in without strategy creates new problems. Assess, pilot, learn, scale.

Your Next Steps

The evolution from codeless to AI-powered testing isn't coming, it's here. The question is whether you'll be early to embrace these capabilities or spend years catching up.

Immediate actions to take:

  1. Assess your current testing maturity. Where are you spending the most time? Where are tests failing most frequently? What's your current maintenance-to-creation ratio?
  2. Identify one high-impact pilot opportunity. Don't try to transform everything at once. Find one test suite where AI-powered capabilities would deliver clear, measurable value.
  3. Explore AI-augmented platforms. Download Katalon Studio to experience AI-assisted test creation, self-healing tests, and intelligent maintenance firsthand. See how StudioAssist turns natural language into test cases in seconds.
  4. Measure everything. Establish baseline metrics now, test creation time, maintenance burden, failure rates, coverage gaps, so you can quantify improvement.
  5. Invest in team learning. AI testing requires new skills and mindsets. Dedicate time to training, experimentation, and building confidence with AI-augmented workflows.

The testing landscape has fundamentally changed. Teams that adapt to this new reality, combining human expertise with AI power, will deliver higher quality software, faster, with fewer resources and less stress.

Those that don't will find themselves increasingly outpaced by competitors who have.

The choice, as always, is yours. But the window for early advantage is narrowing.

Disclaimer: The articles reposted on this site are sourced from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes on third-party rights, please contact service@support.mexc.com for removal. MEXC makes no guarantees regarding the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. The content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.
