Once you have mastered an AI drawing tool, the key question is whether it actually makes the designer's work more efficient. This article walks you through a complete AI-assisted process for taking a requirement from brief to delivery. Step one covers the early requirement communication and confirming the low-fidelity framework (the exact process differs between companies, but the end goal is the same); steps two and three explain the detailed workflow and the points to watch during implementation. The AI tool used in this tutorial is Midjourney; let's look at the whole process, from character to background generation!
The estimated time for this deliverable was 15 days. Compared with how we worked before AI tools were introduced, both the early upstream alignment and the mid-stage style definition were greatly accelerated. I hope this process breakdown helps you use AI tools sensibly to speed up the delivery of future requirements.
A suggestion here: send your style references, together with your own ideas, to the upstream stakeholders of the requirement. Because there is already a color direction to refer to, they can give their own suggestions sooner, which speeds things up. (P.S. You don't have to pick similar events for reference; you can extend the idea to the Dragon Boat Festival or other holidays, since the modules share many details.)
In this step you can mark the visual weight of each module, and then confirm with the upstream stakeholders that there are no errors in the functionality or in your understanding.
When using AI to produce the main image and supporting elements, first build the framework layer of the interface (outside the main visual area, a simple rough draw is enough), so that the subsequent output has a general sense of color and direction. Once the framework layer is built, use AI to produce the main visual; and once the main visual area is complete, add details to the buttons and modules as the icing on the cake.
At the start of the plan we envisioned a little boy in a tiger suit, so we can use that as the core keyword for generation. Don't be too rigid during generation: be sure to fine-tune the keywords many times and try a few more batches; sometimes you get surprising results.
(P.S. After several rounds of keyword fine-tuning, about twenty sets were generated, and one was selected.)
Here's the prompt I used:
chinese new year chinese tiger trailer, in the style of charming character illustrations, miki asai, toycore, olympus penf, octane render, playful figures, konica big mini
When generating backgrounds (scenes), note that an AI-generated scene will never be as element-simple as a generated main character IP, because we can't control how many elements appear in a large scene (the same holds in Stable Diffusion: the larger the scene, the more elements), so most scenes need refinement in a later pass.
The specific approach varies from person to person, but the main goal is the same: optimize in a direction that fits the subject of the requirement.
Compositing (image blending) optimization
Once the two steps above are done, don't just stack the assets directly in the interface, or it will look flat. The correct way is to first establish the central perspective point of the picture, then place decorative elements according to that perspective point, which gives the picture more layering and depth.
Drawing the title
After the main visual is drawn, we can draw the title. There are generally two situations:
1. If the picture is relatively simple and the theme background is uniform, you can simply set the title in a plain typeface with a drop shadow to make the font stand out.
2. But when, as here, the background is bright, the elements are complex, and there is some perspective, you can choose to draw the title on a backing panel. Note that the backing panel should also carry a certain sense of perspective, kept consistent with the perspective trend of the main visual; then warp and adjust the font to match.
Here is the prompt I used. (No reference image is needed; but for generating more complex objects such as drums or gongs, you can use a photo of the real object as a reference image.)
(Fill in the subject you want to generate) 3D icon, cartoon, clay material, 3D rendering, smooth and shiny, cute, isometric, yellow and red, spot light, white background, best detail, hd, high resolution
After generation, to unify the light source, you can paint an extra layer over the light and dark sides.
When the requirement is complete, run a review within your group first, then communicate with upstream once more. If there are no problems, hand off to development, and finally carry out design and interaction acceptance.
With the rapid development of AI technology, designers must constantly update their skills and knowledge to keep up with this fast-changing industry. However, to truly master AIGC techniques and apply them in design work, we need a solid aesthetic sense and skill base; only on that foundation can we quickly pick the best option from massive amounts of AI output and apply it to actual design.