My Fellow Test Engineers,
If you are using Gen AI and LLMs as aids to do better work, then yes, we should be exploring them. I have no second thoughts about that. But pause and ask: what exactly are you feeding into these models?
- Is it part of a software requirement, a video, an image, code, or anything else that hints at what the system under test is?
- In other words, is it part of the prompt text you are giving to the Gen AI model(s) and other LLMs? If so, work through the questions below:
- Would my organization and its business see a threat or risk in my doing so?
- Does it violate my NDA (Non-Disclosure Agreement)? If yes, how could I be impacted and pursued legally by my employer's business and organization?
- Could I be terminated from my job or contract for doing this?
- Will I have to face legal consequences?
I tell you, for today's LLMs, the gist of a requirement, a code snippet, a video, or an image is enough to identify what a request is about. From there, the system behind it can build an increasingly accurate picture of your work with every additional prompt.
My questions:
- How far do you trust the models behind the service?
- Do you know what it is observing and recording, especially when you log in with your business identity and prompt?
- Even if you have customized the model and host it within your own environment, how do you know it is not connecting back to its vendor's environment and sharing what it has learned? (One basic check is sketched after these questions.)
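One hedged way to gain some confidence on that last question is to run the self-hosted model in a network-isolated sandbox and verify that nothing inside it can reach the outside world. Below is a minimal sketch in Python, assuming you run it from inside that sandbox. The hosts named are hypothetical placeholders, not real vendor endpoints; substitute whatever your deployment's documentation says the model might contact.

```python
import socket

# Hypothetical endpoints a self-hosted model *might* try to reach.
# These are placeholders; take the real list from your vendor's docs.
SUSPECT_HOSTS = [
    ("api.example-llm-vendor.com", 443),        # placeholder vendor API
    ("telemetry.example-llm-vendor.com", 443),  # placeholder telemetry host
]

def egress_is_blocked(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if an outbound TCP connection to host:port fails."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return False  # connection succeeded: egress is open
    except OSError:
        return True  # DNS failure, refusal, or timeout: egress appears blocked

if __name__ == "__main__":
    for host, port in SUSPECT_HOSTS:
        status = "BLOCKED" if egress_is_blocked(host, port) else "OPEN"
        print(f"{host}:{port} -> {status}")
```

If any endpoint shows OPEN from inside the sandbox where the model runs, the isolation claim deserves a closer look. An in-process check like this is only a smoke test; firewall rules and packet capture at the network boundary are the more reliable verification.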
Let us leverage prompt engineering, Gen AI, and the associated LLMs. But be aware and conscious of what we are giving out!
Let us also think about what we pass on to the community when we casually advise using prompts such as "write test cases for this given requirement", "read the given requirement and give a test design and strategy", or "review this code snippet". Such loose, vague advice can be harmful if it is followed blindly! At the very least, sanitize what you paste; a rough sketch follows.
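If you do paste requirement text or code into a prompt, one mitigation is to redact obvious identifiers before the text leaves your machine. The helper below is a minimal, hypothetical sketch in Python: the patterns shown (email addresses, an internal hostname, a project codename) are illustrative assumptions, and a real organization would maintain its own redaction rules from its data-classification policy.

```python
import re

# Illustrative redaction rules; a real list would come from your
# organization's data-classification policy, not from this sketch.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+(\.[\w-]+)+"), "<EMAIL>"),         # email addresses
    (re.compile(r"\b[\w-]+\.internal\.example\.com\b"), "<HOST>"),  # hypothetical internal hosts
    (re.compile(r"\bProjectOrion\b"), "<CODENAME>"),                # hypothetical codename
]

def sanitize_prompt(text: str) -> str:
    """Replace known sensitive patterns before text goes into a prompt."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    raw = ("ProjectOrion login service on auth.internal.example.com "
           "must lock accounts after 5 failures; contact qa.lead@example.com.")
    print(sanitize_prompt(raw))
    # -> "<CODENAME> login service on <HOST> must lock accounts
    #     after 5 failures; contact <EMAIL>."
```

Even with such a filter, redaction is best-effort: it catches known patterns, not everything sensitive. The safest prompt is still one that carries no more of your employer's context than the task truly needs.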
By pasting such text into prompts and calling it prompt engineering, we might be giving out what we are not supposed to in the context of our employer's work. Caution!