I am drawing some images and need to save the file to use later, but when using the command sendxy=1 it only sends out the first touch point, not all of the points along the line.
Is this already available and I just need to write some extra code, or is it a possibility for future development?
As well as sendxy sending only the first touch point's coordinates in the "Pressed" report, it also sends that same first touch point's coordinates in the "Released" report (so a press can be matched to a release to determine that a touch is complete).
First, I have not seen any functions capable of capturing a screen drawing.
Next, even if you could somehow hook the same function that draws the pixel so an interrupt sends that coordinate out over serial, data loss would occur quickly once interrupts are lost to many pixels being drawn/recorded back to back.
Next, capturing a drawing through sendxy coords in real time would push about 18 bytes to the serial port for each and every pixel pressed/released. It wouldn't take long before data loss occurs (either the pixel is not captured because the serial (frame) buffer is full, or the (circular) buffer wraps and overwrites other data before it could be sent out over the wire).
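For reference, those 18 bytes break down as two 9-byte touch-coordinate reports (one press, one release). A minimal Python sketch of a parser for one such report, assuming the 0x67 layout of x high/low byte, y high/low byte, press state, terminated by 0xFF 0xFF 0xFF:

```python
def parse_touch_frame(frame: bytes):
    """Parse one 9-byte 0x67 touch-coordinate report.

    Assumed layout: 0x67, xHi, xLo, yHi, yLo, state, 0xFF, 0xFF, 0xFF
    where state is 0x01 for press and 0x00 for release.
    """
    if len(frame) != 9 or frame[0] != 0x67 or frame[6:9] != b"\xff\xff\xff":
        raise ValueError("not a valid 0x67 touch frame")
    x = (frame[1] << 8) | frame[2]   # coordinates are big-endian 16-bit
    y = (frame[3] << 8) | frame[4]
    pressed = frame[5] == 0x01
    return x, y, pressed

# One press/release pair is 2 * 9 = 18 bytes on the wire.
```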
Next, there is no exposed function to read a pixel value from the display's GRAM.
Next, it cannot be stored in 8K of SRAM (at 18 bits per pixel, 240x320 is 172,800 bytes and 800x480 is 864,000 bytes).
Next, there is no file system defined by the firmware to, say, save it to flash.
Next, even if it could be saved to flash, it would certainly be a long download over the serial Tx/Rx line (a minimum of 15 sec at 115200 baud for 240x320, and a minimum of 75 sec at 115200 baud for 800x480). If for any reason you couldn't maintain the highest 115200 baud speed, the times double at the next highest speed, 57600 baud.
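The arithmetic behind those size and timing estimates, assuming an 18-bit-per-pixel framebuffer and 10 bits on the wire per byte (N81: one start bit, eight data bits, one stop bit):

```python
def framebuffer_bytes(width, height, bits_per_pixel=18):
    """Size of a raw framebuffer in bytes."""
    return width * height * bits_per_pixel // 8

def transfer_seconds(num_bytes, baud=115200):
    """Minimum time to move num_bytes over an N81 serial link
    (1 start + 8 data + 1 stop = 10 bits per byte)."""
    return num_bytes * 10 / baud

small = framebuffer_bytes(240, 320)   # 172,800 bytes -> 15.0 s at 115200
large = framebuffer_bytes(800, 480)   # 864,000 bytes -> 75.0 s at 115200
```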
Interesting thought though.
I agree, I was iterating through some of the possibilities and limitations towards achieving such a goal.
Drawing is a possibility, though the approach needs to be modified. Your limiting factor is that touch press/release sends only the starting press point. Capturing a freehand "drag" is removed from the potential functionality: no ending point and no points during the drag.
However, drawing with pixels, lines, circles, curves and arcs could be achieved, and you would have to program for it. One possible approach is as follows. Assemble a series of "buttons" along the outer edge - these will be for "pencil" to mark a pixel, "line" to draw a line, "circle" to draw a circle, etc.
Your programming logic:
If pencil is selected, the press/release point is the pixel drawn. The sendxy coord is sent over serial; you capture that point on your connecting MCU and adjust the bitmap there. For the Nextion, draw x,y,x,y,color where x is the x coordinate value and y is the y coordinate value. Adjust the logic so that pencil is complete and waiting for new points.
If line is selected, you are looking for two points before you make that logic adjustment back to waiting for new points. For the first press/release point, the sendxy coord is sent over serial and you capture and store it; for the second press/release point, the sendxy coord is sent over serial and you capture and store that. On your connecting MCU you adjust your bitmap with the line from first to second. On the Nextion, line x1,y1,x2,y2,color where x1,y1 are the coords from the first press/release and x2,y2 are the coords from the second. Now you can adjust the logic on the Nextion so that it is waiting for new points.
If circle is selected, you are again looking for two points before you make that logic adjustment back to waiting for new points. Capture and store the first and second press/release points as before. The distance between the first and second points needs to be calculated - that is your radius value. On your connecting MCU you adjust your bitmap with the circle, where the first point is the center and the radius is the calculated value. On the Nextion, cir x,y,r,color where x,y are the coords from the first press/release and r is the calculated radius. Now you can adjust the logic on the Nextion so that it is waiting for new points.
Continue through each type of drawing you want to implement. Keep one button on your Nextion bar to signal when your drawing is complete; on your MCU, when that button press is received, commit your bitmap to file. Save it to an SD card accessible by your connecting MCU.
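The pencil/line/circle logic above can be sketched as a small state machine on the connecting-MCU side; shown here in Python (natural for a Raspberry Pi host), with hypothetical names. It collects the number of points the selected tool needs, then emits the corresponding Nextion drawing instruction as a string:

```python
import math

# Points required before each tool's draw instruction can be emitted.
TOOLS = {"pencil": 1, "line": 2, "circle": 2}

class DrawCapture:
    def __init__(self, color=31):      # color value is a placeholder
        self.color = color
        self.tool = "pencil"
        self.points = []               # points collected so far

    def select_tool(self, tool):
        self.tool = tool
        self.points = []               # reset: waiting for new points

    def touch(self, x, y):
        """Feed one press/release point from sendxy. Returns the Nextion
        instruction string once enough points are collected, else None."""
        self.points.append((x, y))
        if len(self.points) < TOOLS[self.tool]:
            return None
        cmd = self._emit()
        self.points = []               # back to waiting for new points
        return cmd

    def _emit(self):
        if self.tool == "pencil":
            (x, y), = self.points
            return f"draw {x},{y},{x},{y},{self.color}"
        if self.tool == "line":
            (x1, y1), (x2, y2) = self.points
            return f"line {x1},{y1},{x2},{y2},{self.color}"
        # circle: first point is the center, second sets the radius
        (cx, cy), (px, py) = self.points
        r = round(math.hypot(px - cx, py - cy))
        return f"cir {cx},{cy},{r},{self.color}"
```

Your host would mirror the same operation into its own bitmap before (or while) sending the instruction back to the Nextion.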
So, as to whether it can be done: yes, with some effort - just that you cannot capture a continuous press drag.
PS: Implementing entire graphics libraries in the firmware of an MCU probably could not fit. But programming capture, process and update works via a secondary MCU (the one that connects to the Nextion). Your choice of second MCU will of course determine your limits. Obviously a Raspberry Pi has far more resources to accomplish this than an Arduino Nano.
All the best,
I will request a little less condescension. At least I offered an approach that could function toward an undescribed, ambiguous goal and may achieve a result while waiting for a request to be considered. The difficulty I see is providing common ground across the multiple devices: the larger versions, with their graphics driver on the PCB, are quite different from the smaller versions, where the driver is embedded in the display. The other difficulty I see is the limited capacity of the PCB's MCU. I assume one could take the approach of filling the MCU with software to the point there is no room left to process the user-designed HMI, but I would probably take the approach of putting only the necessary firmware on the MCU and leaving out what can be user-coded and driven from the user MCU that connects to the display.
I think the resulting issue you will get, if the function could be implemented, becomes the delay between the time the HMI user has drawn the line and when you can get the limited amount of data the STM MCU was able to store in the buffer returned to your connecting MCU. On the smaller displays, the inaccessible ILI9341 graphics driver may record a press when contact is made, but does it have the circuitry to even track movement, or does it merely record the position where contact is lost? This would coincide with the touch/release events that we have in the Editor. I will further that with the interrupt capabilities of the STM32F0 MCU: a single interrupt can have a second "waiting interrupt" recorded if the linear programming of the second interrupt won't interfere with the stack pointer; otherwise it is generally discarded or lost, and there certainly aren't provisions for a third, fourth and fifth -- and that is why, when you are dragging your stylus in draw mode, you get skips.
However, what is occurring in draw mode, as the behaviour shows, is this: sendxy on contact is sent to the outgoing buffer and is then suspended to avoid a slew of instructions recording to the outbound buffer, followed by the series of 90 high/low bit levels (9 bytes at 10 bits each) to send each report over an N81 serial connection, for each of the points captured and not lost to the cascading interrupt limitation described above. After the drag has been completed and contact has been lost, the final point is sent to the buffer and sendxy resumes.
One point to consider in adding this extra series of commands: can the user's MCU, connected over the serial Rx/Tx lines, even capture all of the additional data this request asks for as fast as the user is drawing it on the screen? What limitations do you impose on your user via software? What millisecond interval are you going to implement so that your MCU-side software knows points are missing and intuitively fills them in?
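One common way to "fill in" missing points on the MCU side is simple linear interpolation between consecutive captured points. A minimal sketch, stepping one pixel at a time along the longer axis (function name and approach are mine, for illustration):

```python
def interpolate_gap(p0, p1):
    """Return the intermediate integer pixels between two captured
    points, via linear interpolation (p0 and p1 themselves excluded)."""
    (x0, y0), (x1, y1) = p0, p1
    # Number of single-pixel steps along the dominant axis.
    steps = max(abs(x1 - x0), abs(y1 - y0))
    return [
        (round(x0 + (x1 - x0) * i / steps),
         round(y0 + (y1 - y0) * i / steps))
        for i in range(1, steps)
    ] if steps > 1 else []
```

Whether this looks "intuitive" depends on how large the gaps are; a sharp corner drawn between two captured points will come out as a straight segment.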
I will refer back to my point about the common ground needed across all devices. I will assume, with a 108MHz chip being put on the larger Enhanced versions, that it may indeed be possible in the future on a more capable device. I may even go so far as to assume it will be a function of the Enhanced line and, sadly, may not be included in the Standard devices. I will point out that "intuitive" is actually achieved on your MCU that connects to the Nextion display, via your software programming -- and obviously a Raspberry Pi or Intel Edison will have greater capabilities to achieve that intuition than an Arduino Nano or an Atmel Tiny85.
And finally, re-reviewing "My Programming Logic", it may not be as far out to lunch as one might think.
There is nothing wrong with your request. To clarify my point: I was iterating through the potential barriers and approaching some possible ways one could begin to think about finding a solution that functions. I will relate it to a "This doesn't work" question. Without more details -- can you show what code you have, where is your HMI so I can take a look, what is the environment (Arduino, Intel, other; which libraries, whose libraries) -- it is hard to describe a one-size-fits-all solution.
From the beginning of time, hardware circuitry only takes things to a point, and then software kicks in to create virtual circuitry to accomplish the task. I have been digging deep into the ILI9341 drivers (hey, I don't even know which Nextion display you have - it might have slightly different drivers and hardware) and their instruction set, coupled with the STM32F0 version of the Cortex-M0 Thumb-16 instruction set, coupled with Nextion variables and Nextion's instruction set, reviewing the binary output of the .tft files to find how you might be able to accomplish your goal. I have not completed my work, but some thoughts on an approach come to mind.
What if you were to include a timer set at 50ms (the minimum value for a timer) in your initialization and touch/release logic in your HMI? You may be able to re-trigger your touch release. I haven't personally tried it yet. You might have to store your current drawing variables (brush color, etc.) into user-defined variables, and then, back to back: turn drawing off, reinitialize the draw variables from your stored user-defined variables, and turn drawing back on -- with your timer reinitialized in the touch-pressed logic, you might actually be able to simulate your effect. I'll point out that 50 ms is at most 20 times per second, and the logic may steal a few in practice, but you might be able to automate and realize 15 points per second.
The understanding, of course, is that your user could out-draw that with a quick drag, and that this depends on the display driver interpreting the "press already in progress" as a new press event when switched back on. If the driver requires its voltage to change from a press/release on the actual screen, this approach will of course fail.
First, I need to clarify that I am not Itead staff. I was offering approaches and factors needing consideration that might help you achieve your result with the existing Standard version displays. Understand that, as programmers, finding ways and methods to achieve a goal and push the hardware to its limits is what we do.
As you pointed out, since your original request 14 days ago we have seen that a new Enhanced version is coming and can be pre-ordered now. It definitely looks like it will have more capabilities, though I thought I saw only 1024 bytes of space to save to, and that still doesn't address your feature request of being able to save to the flash chip.
May I recommend you open a support ticket. Since Itead also provides custom PCB work, I have to assume there would be many options available (beyond the limitations of the Standard edition) for you to realize your goal. I would recommend that you discuss your specific requirements with them much more directly.
I am now reviewing all of the Feature Requests; this will take some time - patience, please.
Request to have sendxy transmit drag points not just start/end - carried forward.