Last month, OpenAI launched GPT-4 with vision (GPT-4V), allowing the chatbot to read and respond to questions about images. One of the many ways AI users are putting the new feature to work is decoding redacted government documents on UFO sightings. "ChatGPT-4V Multimodal decodes a redacted government document on a UFO sighting released by NASA," one tweet raves. "Maybe the truth isn't out there; it's right here in GPT-V." Decrypt reports: Filling gaps in a string of text is essentially what LLMs do. The user did the next best thing when testing GPT-4V's capabilities: he had it guess the parts of a text that he himself had censored. "Nearly 100% intent accuracy," he reported. Of course, it's hard to verify whether its guess at what's otherwise obscured is accurate -- it's not like we can ask the CIA how well it did peering through the black lines. Other ways users are putting GPT-4V to work include: deciphering a doctor's handwriting; interpreting medical images, such as X-rays, and offering analysis and insights on specific medical cases; estimating the nutritional content of meals or food items; assisting interior design enthusiasts with suggestions based on personal preferences and images of living spaces; and providing technical analysis of stocks and cryptocurrencies based on screenshots.