New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content.
Open source Tracee uses Linux eBPF technology to trace systems and applications at runtime, and analyzes collected events to detect suspicious behavioral patterns.
The skills needed to implement and manage cloud services differ substantially from those needed for on-premises applications. This has created a major hurdle for businesses needing to innovate.
Customer-obsessed organizations should introduce API gateways alongside enterprise service buses to optimize service connectivity. Here’s how.
The vast metaverse will also be vast in terms of code, accelerating the demand for supply chain security, automated scanning and testing, and continuous updates.
Businesses often lack critical insights into the security of their cloud environment. Here are nine questions business leaders need to ask—and cloud security teams need to answer.
YugabyteDB 2.13 brings materialized views, local reads for performance, region-local backups, and much more, extending the geo-distribution capabilities of the database.
Recording the model development process on the blockchain can make that process more structured, transparent, and repeatable, resulting in less bias and more accountability.
Artificial general intelligence will be able to understand or learn any intellectual task that a human can. AGI will have high costs and huge risks, but it’s coming—maybe soon.
Today we’re seeing a major evolution in how search anticipates what users want before they even know they’re looking for it. Developers should be tuning in.
Cloud security is all about configuration. Here’s how to make sure the configurations of your cloud resources are correct and secure, and how to keep them that way.
Understand the two dimensions of scaling for database query and ingest workloads, and how sharding can make scaling elastic—or not.
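The split between ingest scaling and query scaling described above can be sketched in a few lines. This is a minimal illustration, not any particular database's implementation: `ShardedStore` and its methods are hypothetical names, and the hash-routing scheme is the simplest possible choice.

```python
# Hypothetical sketch: hash-based sharding, showing why ingest and
# query workloads scale along different dimensions.

def shard_for(key: str, num_shards: int) -> int:
    """Route a row to one shard by hashing its key."""
    return hash(key) % num_shards

class ShardedStore:
    def __init__(self, num_shards: int):
        self.shards = [dict() for _ in range(num_shards)]

    def ingest(self, key: str, value) -> None:
        # Writes scale out cleanly: each write touches exactly one shard.
        self.shards[shard_for(key, len(self.shards))][key] = value

    def get(self, key: str):
        # Point reads also hit a single shard...
        return self.shards[shard_for(key, len(self.shards))].get(key)

    def scan(self):
        # ...but scans must scatter-gather across every shard, which is
        # why query scaling behaves differently from ingest scaling.
        for shard in self.shards:
            yield from shard.items()
```

Adding shards makes ingest elastically parallel, but any query that cannot be routed by the shard key still pays a cost proportional to the number of shards.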
Narrow AI applications such as Google Search and Amazon Alexa are great at solving specific problems, but only as long as you stick to the script.
Machine learning workloads require large datasets, while machine learning workflows require high data throughput. We can optimize the data pipeline to achieve both.
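One common pipeline optimization implied above is overlapping data loading with compute. Here is a minimal, framework-agnostic sketch using a background prefetch thread; the function name and buffer size are illustrative, not from any specific ML library.

```python
# Hypothetical sketch: keep the training loop fed by prefetching
# batches on a background thread while the model consumes them.

import queue
import threading

def prefetch(batches, buffer_size=2):
    """Yield batches while a producer thread keeps a small buffer full."""
    q = queue.Queue(maxsize=buffer_size)
    _DONE = object()  # sentinel marking the end of the stream

    def producer():
        for batch in batches:
            q.put(batch)   # blocks when the buffer is full (backpressure)
        q.put(_DONE)

    threading.Thread(target=producer, daemon=True).start()
    while (item := q.get()) is not _DONE:
        yield item
```

Because loading the next batch happens while the current one is being processed, throughput approaches the slower of the two stages rather than their sum.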
Choosing the wrong database for data-intensive applications opens a door to scaling challenges and unnecessary complexity. Making the right choice is simpler.
Flexera 2022 State of the Cloud survey finds data warehouses, databases, and containers to be the top cloud draws, with serverless and AI/ML rising fast.
Vector databases unlock the insights buried in complex data including documents, videos, images, audio files, workflows, and system-generated alerts. Here’s how.
PostgreSQL was the first relational DBMS to introduce JSON support, and its JSONB index search capability is unique. More is on the way.
A zero-day vulnerability in Argo CD could be putting sensitive information like passwords and API keys at risk. Are you protected?
How the 12-factor methodology, container-based microservices, and a monorepo approach win with both customers and developers at Priceline.
As IoT extends into every facet of our lives, the big challenge will be delivering data solutions that are interoperable with legacy, current, and future systems.
There’s a difference between technology adoption and vendor lock-in. Technology adoption has gravity, but vendor lock-in has teeth.
More organizations will tune into the far-reaching benefits of a symbiotic human-AI relationship in the coming year. Here’s how.
How Incorta’s unified data analytics platform closes the gap between strategic and operational decision-making.
In the coming year, organizations will seek to simplify, optimize, and consolidate observability through a mix of new tools and practices.
Four predictions for how technology innovation will allow competitive businesses to distinguish themselves in the coming year.
Strengthening the software supply chain must be priority No. 1 in the new year. Here are three areas to focus on.
The ability to reuse pre-built AI solutions and components, and customize them without coding, will finally allow AI solutions to be created without requiring scarce AI talent or costly IT resources.
In the aftermath of Log4Shell, generating software bills of materials and quickly accessing their information will be critical to addressing the new realities of software supply chain vulnerabilities and attacks.
The difficulties and challenges of running Kubernetes multiply as you scale. Here are four things we’ll need to manage multi-cluster orchestration.
The last five years have seen the rise of the cloud data warehouse. What will the next five years bring?
Focus on these engineering best practices to build high-quality models that can be governed effectively.
A bug in the ubiquitous Log4j library can allow an attacker to execute arbitrary code on any system that uses Log4j to write logs. Does yours?
Open source Trivy plugs into the software build process and scans container images and infrastructure-as-code files for vulnerabilities and misconfigurations.
Moving data science into production has quite a few similarities to deploying an application. But there are key differences you shouldn’t overlook.
Kylin was built to query massive relational tables with sub-second response times. A new, fully distributed query engine steps up performance of both cubing and queries.
Developers quickly understood the value of containers for building cloud-native applications, and that the Docker command-line tool was better than all of the bells and whistles they got with PaaS.
Determining the performance metrics that really matter for your application can make life a lot easier for your team and express your standards clearly across the business.
By using the RED metrics—rate, error, and duration—you can get a solid understanding of how your services are performing for end users.
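The RED method boils down to three numbers per service. As a rough sketch (the record fields and percentile choice here are illustrative assumptions, not a standard API):

```python
# Hypothetical sketch: computing RED metrics (rate, errors, duration)
# from a window of request records.

from dataclasses import dataclass

@dataclass
class Request:
    duration_ms: float
    status: int  # HTTP status code

def red_metrics(requests: list[Request], window_seconds: float) -> dict:
    """Summarize a service's traffic in the three RED dimensions."""
    if not requests:
        return {"rate": 0.0, "error_rate": 0.0, "p50_duration_ms": 0.0}
    durations = sorted(r.duration_ms for r in requests)
    errors = sum(1 for r in requests if r.status >= 500)
    return {
        "rate": len(requests) / window_seconds,             # requests per second
        "error_rate": errors / len(requests),               # fraction that failed
        "p50_duration_ms": durations[len(durations) // 2],  # median latency
    }
```

In practice these are computed continuously by a metrics system rather than in batch, but the three dimensions are the same.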
When performance issues arise, checking the USE metrics—utilization, saturation, and errors—can help you identify system bottlenecks.
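Applied to a single resource, the USE checklist looks something like the following sketch. The thread-pool fields and the 90% utilization threshold are illustrative assumptions, not fixed rules.

```python
# Hypothetical sketch: the USE checklist (utilization, saturation,
# errors) applied to one resource, here a thread pool.

from dataclasses import dataclass

@dataclass
class ThreadPoolSample:
    busy_threads: int
    total_threads: int
    queued_tasks: int    # work waiting because the pool is busy
    rejected_tasks: int  # work the pool could not accept at all

def use_metrics(s: ThreadPoolSample) -> dict:
    return {
        "utilization": s.busy_threads / s.total_threads,  # how busy the resource is
        "saturation": s.queued_tasks,                     # queued work it can't service yet
        "errors": s.rejected_tasks,                       # outright failures
    }

def bottleneck_suspected(s: ThreadPoolSample) -> bool:
    m = use_metrics(s)
    # High utilization plus a growing queue is the classic bottleneck signature.
    return m["utilization"] > 0.9 and m["saturation"] > 0
```

Repeating this check per resource (CPU, disks, network, pools, locks) is what turns USE from three words into a systematic bottleneck hunt.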
Data science toil saps agility and prevents organizations from scaling data science efforts efficiently and sustainably. Here’s how to avoid it.
Legacy networking approaches don’t align with the way that cloud providers create services or provide access to them, and only introduce more complexity. Move to the cloud, but leave your traditional networking behind.
Like Kubernetes itself, the underlying object storage should be distributed, decoupled, declarative, and immutable.
An overview of the strengths and weaknesses of today’s cloud database management systems.
For any company exploring the potential of the cloud and Kubernetes, adopting infrastructure as code, security as code, and automation will be essential.
Empowering cloud teams with automated policy-as-code guardrails helps them move faster and more securely.
Much of the software we use today is built on re-implemented APIs, like the Java API in question in Oracle v. Google. An Oracle victory would have stopped open-source innovation in its tracks.
Today, companies across every industry are deploying millions of machine learning models across multiple lines of business. Soon every enterprise will take part.
SLAs are for lawyers. Service level objectives not only introduce finer-grained reliability metrics, but also put that telemetry data into the context of user happiness.
Shipping software has always been about balancing speed and quality control. Many great technology companies built their empires by mastering this skill.
Algorithmic biases that lead to unfair or arbitrary outcomes take many forms. But we also have many strategies and techniques to combat them.