Adobe has revealed that it is collaborating with researchers to build tools that detect edited images. There is no denying that Adobe's algorithms help users retouch photos while preserving natural contours and the "truth" of the original capture, but those same capabilities are increasingly abused by bad actors. Fake videos that graft celebrities' faces onto other footage (deepfakes) are appearing more and more often, which has prompted Adobe to voice its concern and accept part of the responsibility.

The new research outlines a different approach: use AI to reverse the edits and work back toward clues about the original image. Liquify is a widely used filter for beauty retouching; it deforms images by pushing, pulling, rotating, bloating, or thinning regions, and even small Liquify adjustments can produce striking results. Building on that idea of reversal, the team trained a neural network on face photos captured before and after editing with Liquify.

The results are very encouraging. Human volunteers spotted the edited photos only 53% of the time, while the AI model identified them with about 99% accuracy. The tool can even suggest how to restore an image to its original, unmodified state, although the reconstruction is not yet fully accurate.

Useful as these tools are, detection still has a long way to go before it catches up with the creation of fake images. Hany Farid, a professor of computer science at UC Berkeley, told the Washington Post that researchers' efforts to detect deepfake videos are being overwhelmed by the growing number of bad actors and by increasingly sophisticated imitation technology.
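To make the idea of training on before-and-after data more concrete, here is a minimal, hypothetical sketch of that kind of setup: a binary classifier that learns to tell untouched face crops from ones that have been warped with a Liquify-style distortion. This is not Adobe's actual model; the folder layout ("data/faces/original" and "data/faces/warped"), the ResNet-18 backbone, and all hyperparameters are assumptions made purely for illustration.

```python
# Hypothetical sketch: train a binary classifier on "original" vs. "warped"
# face crops. Not Adobe's model; dataset paths, backbone, and hyperparameters
# are assumptions for illustration only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

def build_loader(root: str, batch_size: int = 32) -> DataLoader:
    # Assumes one folder per class: root/original/*.jpg and root/warped/*.jpg
    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    ds = datasets.ImageFolder(root, transform=tfm)
    return DataLoader(ds, batch_size=batch_size, shuffle=True)

def train(root: str = "data/faces", epochs: int = 3) -> nn.Module:
    device = "cuda" if torch.cuda.is_available() else "cpu"
    loader = build_loader(root)

    # Small pretrained backbone with a 2-way head: "original" vs. "warped".
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)
    model = model.to(device)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    for epoch in range(epochs):
        correct, total = 0, 0
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            logits = model(images)
            loss = criterion(logits, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            correct += (logits.argmax(dim=1) == labels).sum().item()
            total += labels.size(0)
        print(f"epoch {epoch + 1}: train accuracy {correct / total:.2%}")
    return model

if __name__ == "__main__":
    train()
```

A classifier like this only says whether a face looks edited; suggesting how to undo the edit, as the article describes, would additionally require the model to predict the warp itself so it can be reversed, which is beyond this simplified sketch.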