
What do a Bengal cat and a “scary receipt” have in common? The same issue with evidence.

  • Writer: Tomasz Dyrda
  • Feb 25
  • 5 min read

Gazeta Wyborcza published an article about fraud schemes that are made easier by the widespread availability of generative AI (“Vinted, Allegro czy Wolt zmagają się z wyłudzeniami przy użyciu sztucznej inteligencji”; in English: “Vinted, Allegro and Wolt are struggling with AI-enabled fraud”). Thanks to widely accessible models, images can be altered and then used for fraud and scams. GW describes this in the context of online purchases and orders.

A similar problem (fraud enabled by AI) was described, among others, by the Financial Times in the context of generated fake receipts submitted by employees as the basis for expense reimbursements (Forging ahead: the challenge of AI expenses fakes).

Are businesses really helpless and at a complete disadvantage? Not necessarily. AI models are already so advanced that they generate images that are practically indistinguishable from real photos. Image analysis alone may not be enough to determine whether what we see is reality or an artificially generated image (so-called “synthetic media”).

This is precisely the area where using digital forensics methods (and, more broadly, investigative methods) combined with—nomen omen—AI and the examination of data from external sources can help fight fraudsters.

We can consider two different scenarios: in the first, we are dealing with a potentially fake photo (the cases described by GW); in the second, with “synthetic” receipts submitted by employees for reimbursement of fictitious expenses (the cases described by the FT).


Scenario 1: Fake photo — explore metadata

A photo taken with a phone or camera usually contains metadata—so-called “data about data.” In practice, this means that the image file includes additional information. This may include the date and time the photo was taken, the type of camera or device, sometimes photo parameters, the place where it was taken (so-called geolocation), and other details.

This information (metadata) can be read because it is stored in a standardized format (EXIF) used in image files. A sample (real) photo with metadata is shown below.
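To illustrate how such metadata can be inspected programmatically, here is a minimal Python sketch using the Pillow library. It writes a few EXIF tags into a throwaway JPEG (standing in for a real photo; the file name and tag values are invented for illustration) and reads them back:

```python
from PIL import Image
from PIL.ExifTags import TAGS

# Create a small JPEG with a few EXIF tags, standing in for a real photo.
img = Image.new("RGB", (8, 8), "white")
exif = img.getexif()
exif[0x010F] = "ExampleMake"          # Make (device manufacturer)
exif[0x0110] = "ExampleModel"         # Model (camera/phone type)
exif[0x0132] = "2025:02:25 10:00:00"  # DateTime

img.save("sample.jpg", exif=exif)

def read_metadata(path: str) -> dict:
    """Return the EXIF tags of an image file as a {tag name: value} dict."""
    with Image.open(path) as im:
        return {TAGS.get(tag, hex(tag)): str(val)
                for tag, val in im.getexif().items()}

meta = read_metadata("sample.jpg")
print(meta["Model"], meta["DateTime"])
```

In practice the same `read_metadata` call would be pointed at the submitted photo, and the presence (or absence) of device and timestamp tags examined.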

However, it is not only cameras and phones that save metadata. Generative AI models can also save metadata that allows us to check who created a given image (file) and when. Below is an example of a synthetic photo (very realistic).


Below are the metadata entries saved by the GenAI model.



If we check the metadata of a photo that we suspect may have been altered or generated by an AI model, and we see metadata like the examples above, then with high probability (bordering on certainty) we can assume it is a synthetic image rather than a real photograph.

Sometimes the absence of information is no less valuable than the information itself. Not all generative AI models save metadata. If metadata fields are empty, that is also a situation where it is worth gathering additional information rather than relying solely on the image.

It is also worth remembering one feature (not necessarily a good one) of metadata: it can be edited or deleted. We should treat metadata as a useful source of information, but not as an oracle deciding whether something is authentic or not.
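The three situations described above (metadata naming a generator, metadata missing, metadata looking like a device photo) can be expressed as a simple triage heuristic. This is a sketch only: the generator keywords are illustrative assumptions, and, as noted, metadata can be edited, so the result is a signal, not a verdict.

```python
# Illustrative keywords; real generators may write different (or no) metadata.
GENAI_KEYWORDS = ("dall", "midjourney", "stable diffusion", "firefly", "imagen")

def triage_metadata(meta: dict) -> str:
    """Classify a {tag: value} metadata dict as 'synthetic', 'missing',
    or 'camera-like'. A heuristic signal, never proof of authenticity."""
    software = str(meta.get("Software", "")).lower()
    if any(k in software for k in GENAI_KEYWORDS):
        return "synthetic"      # metadata names a known image generator
    if not meta.get("Make") and not meta.get("Model"):
        return "missing"        # no capture device recorded: gather more evidence
    return "camera-like"        # looks like a device photo; still not conclusive

print(triage_metadata({"Software": "Stable Diffusion web UI"}))  # synthetic
```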


Scenario 2: Synthetic receipts — there is more than metadata

In many organizations (especially in Western Europe, the U.S., etc.), employers rely on trust in their employees. Processes for verifying and approving business expenses are less common; more often, employers assume that employees act in good faith and do not cheat.

Why this introduction? Because when reimbursing expenses, employers rely on what they see—receipts and bills submitted by employees. This approach was understandable before the era of generative AI models, because falsifying low-value documents was difficult, time-consuming, often imperfect, and ultimately not worth the effort. GenAI changed this by making tools available for generating ultra-realistic images of non-existent (synthetic) documents.

A similar question as before: are employers in a losing position? Not entirely. No method provides 100% certainty, but combining several types of analyses makes it possible to identify highly suspicious cases.

At Deka Forensics, for the purposes of our projects, we developed a tool for checking receipts, invoices, and tickets. When designing this solution, we considered what we could use to automatically identify suspicious documents, also using AI.

We based it on the following criteria that every genuine document should satisfy (we designed the tool partly with the Polish market in mind):

  • (a) it should have metadata, and (b) the metadata should indicate that it originated from a photo-capturing device;

  • the data in the document (especially if these are receipts) should be consistent, and the sum of the individual line items should match the total value shown on the document;

  • the company indicated on the document should exist, and its NIP (Polish tax identification number) should appear on the “white list” of taxpayers;

  • the document should contain the company’s address, and a check in Google Maps should (a) confirm that the address exists, (b) show which company (or companies) operate at that address, and (c) confirm that one of them is the company listed on the receipt.
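The second criterion, internal consistency of the amounts, is the easiest to automate once the line items have been extracted. A minimal sketch (using `Decimal` to avoid floating-point rounding; the tolerance value is an assumption to absorb per-line rounding on real receipts):

```python
from decimal import Decimal

def totals_match(line_items, declared_total, tolerance="0.01"):
    """Check that receipt line items sum to the declared total,
    within a small rounding tolerance."""
    total = sum(Decimal(str(p)) for p in line_items)
    return abs(total - Decimal(str(declared_total))) <= Decimal(tolerance)

print(totals_match(["12.50", "3.99"], "16.49"))  # True
print(totals_match(["12.50", "3.99"], "19.99"))  # False
```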

Passing the above tests obviously does not provide 100% certainty that the document is not forged or “synthetic,” but it significantly reduces the risk.

On the other hand, failing one or more of these tests suggests that the document should be flagged for closer review.
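The NIP criterion naturally splits into two stages: the authoritative check against the Ministry of Finance “white list” (an online API call), preceded by a cheap offline pre-filter using the NIP checksum. A sketch of the offline stage, using the standard NIP checksum weights (the sample numbers are fictitious and merely checksum-valid):

```python
def nip_checksum_ok(nip: str) -> bool:
    """Validate the checksum of a Polish NIP (offline pre-check only;
    a valid checksum does not mean the taxpayer exists)."""
    digits = [c for c in nip if c.isdigit()]
    if len(digits) != 10:
        return False
    weights = (6, 5, 7, 2, 3, 4, 5, 6, 7)
    checksum = sum(w * int(d) for w, d in zip(weights, digits)) % 11
    return checksum != 10 and checksum == int(digits[9])

print(nip_checksum_ok("1111111111"))  # True  (checksum-valid, but fictitious)
print(nip_checksum_ok("1111111112"))  # False
```

A number that fails the checksum can be flagged immediately, without spending an API call; one that passes still needs the white-list lookup.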

This approach to document analysis, where we use:

  • GenAI to read data from receipts, invoices, or tickets and verify internal consistency;

  • electronic file characteristics (photo metadata);

  • access to the taxpayer “white list” via the API provided by the Polish Ministry of Finance;

  • Google Maps information about locations and businesses;

makes it possible to automate analysis, examine documents, and flag suspicious expense claims practically regardless of scale: covering 100% of the population, whether we need to check 10 or 1,000 receipts and bills.
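At the end of such a pipeline, each document reduces to a set of pass/fail results, and anything with a failure goes into the review queue. A trivial sketch (the check names are illustrative, not those of our actual tool):

```python
def failed_checks(results: dict) -> list:
    """Given {check name: passed?} results from the verifications,
    return the names of failed checks; an empty list means no red flags."""
    return sorted(name for name, ok in results.items() if not ok)

print(failed_checks({"metadata": True, "totals": False,
                     "nip_whitelist": True, "address": False}))
# ['address', 'totals']
```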

Is implementing such controls a sign of lack of trust in employees, as I mentioned at the beginning? Rather, it is a response to emerging risks related to new technologies that simply did not exist a few years ago.

***

The articles in GW and FT indicate that the problems resulting from GenAI capabilities are similar and occur in different countries and on different continents. Some of them can be addressed by using the same GenAI to fight the irregularities to which GenAI has contributed.

If we combine this with other investigative and verification methods that do not necessarily have anything to do with GenAI, but allow automated access to independent, reliable data, it is possible to build an approach and tools that are effective in detecting fraud.

Large companies and platforms can implement such solutions internally, integrating them into existing company processes. Smaller organizations can use external solutions similar to what we described in the article. Regardless of which solution is more appropriate for your needs, we invite you to contact Deka Forensics. We have practical experience implementing both models.

Note: The Bengal cat participated in the photo session voluntarily and received compensation for the use of its likeness in the form of a “lux treat” with lobster.

© 2026 Deka Forensics All rights reserved
