As digital surface area expands, human error remains a core cybersecurity risk

Maximizer President and Product & Engineering VP outline how wealth management firms can meet best practices and control for human failings


For all the ways technology can advance and cybersecurity services can grow more sophisticated, the core fact remains that cyber threats exploit deeply human vulnerabilities. Our preference for convenience, our tendency to overlook details, and our implicit trust are all easily exploited by scammers who grow more sophisticated alongside the technology. That’s why one CRM provider is taking a combined technical and human approach to its cybersecurity strategy.

Maximizer is a Vancouver-based CRM provider serving wealth managers around the world. Mike Curliss, President of Maximizer, spoke with WP about how his firm approaches cybersecurity risks in an age when the use of technology keeps growing and the risks grow with it. He outlined how his firm stays ahead of industry best practices and evolving threats, while arguing that one of the best ways to keep financial data safe is to make trusted platforms more frictionless for their users.

“People can still be deceived through phishing attacks or expose sensitive information through simple actions like copying and pasting data between systems,” Curliss says of some of the human errors that lead to security breaches. “Institutions typically have controls in place, but when people operate outside those systems or controls, that’s when they can unintentionally expose themselves and their clients’ data to security risks.”

The choice, for example, to use publicly available ChatGPT to draft an email containing sensitive client data could have serious ramifications for that client’s data security. But the temptation to use a more convenient tool exists, meaning wealth tech providers and firms need to offer versions of these modern conveniences inside their trusted systems. The digital “surface area,” as Curliss puts it, is only growing: institutions and advisors are working with more vendors, more integrations, and more data than ever before. That creates vulnerabilities that can be exploited, especially in the wake of simple human error.

AI amplifies the risks now facing financial institutions by lowering the barrier to entry for more sophisticated forms of attack and by enabling more effective mimicry of people’s faces and voices. Even in the way it allows for easier digestion of data, generative AI tools are making scammers’ lives that much easier.

Adhering to recognized standards remains an important part of meeting that challenge. Curliss stresses his firm’s SOC 2 compliance as a way to ensure it meets or exceeds industry benchmarks for data controls. That also means continually updating systems, auditing processes, testing defenses with third-party penetration checks, and training people to uphold those standards.

“People are under tremendous competitive pressure to perform at a very high level, be more efficient, and provide better quality of service,” adds Alexandre Ackermans, VP of Product & Engineering at Maximizer. “If you use ChatGPT, it's very easy to copy‑paste something from somewhere because you don't have an AI‑approved tool that is as convenient and then paste that in some other application. That's an example of where that data is going to move outside of the trusted system and into a system that is less controlled.”

Ackermans sees countless easy mistakes emerging from this impulse toward convenience. Often they occur when a large institution hasn’t been able to roll out an equivalent tool with appropriate data controls. They can happen through small third-party apps someone installs on their phone to read data from photos, or through online sources where sensitive parts of an internal security process are shared for public consumption. Just as people within organizations need to be trained and coached to avoid these mistakes, Ackermans and Curliss also emphasize the importance of building secure systems that people would rather use.

Removing friction from existing systems, while maintaining security, is key to this approach. That means building a strong UI and UX, as well as offering a secure, in-house AI system to replace the impulse to throw a question into public ChatGPT. Curliss notes that Maximizer has done exactly this with its own AI system.

Eliminating friction where possible, building strong data safeguards, following regulation, identifying threats, and training your people are all important steps in modern cybersecurity, but if those efforts are left to stagnate then weaknesses will inevitably emerge. Ackermans and Curliss stress the importance of constant improvement and vigilance given how fast this environment is evolving.

“If you're going to be operating in this environment, in this industry, you have to make sure that your practices and policies make that part of your core business. And I think if you lead with that, then everything else should follow,” Curliss says. “So there's a proliferation of tools and platforms and third‑party audits that you can bring to help you put those policies and practices in place, but you have to first make sure that you're addressing it from a strategic point of the organization. And then, of course, meeting and helping your clients and your advisors to make sure that you are putting the tools and the protections in place to keep and safeguard their business and their client data.”
